Last modified on 11/11/09 14:00:33




Operations over text are currently very slow. These include:

  • Operations on non-empty text frames: rotate, move, zOrder change, insert, delete, resize, etc.
  • Pasting large text resources (10,000 words, for example) into text frames, as well as typing in frames that already contain large texts. Both are modifications of the text resource.
  • All of the above operations are even slower on chained frames.
  • Auto-chaining is extremely slow.
  • Saving/loading a book with large texts.
  • A large number of small text resources (for example, 100 pages with text frames in them, each holding a page of text).

Task requirements

  • Make creation of a text layout faster. Currently it does not reuse previous layouts, and it contains a lot of unnecessary calculations that slow the layout down considerably (hint: createLineBreakMeasurer()).
  • Make the auto-chaining process faster. It will certainly become much faster once layout reuse is introduced, but the auto-chaining algorithm itself is not very efficient either.
  • Inspect and try to optimize the overall performance with many text resources. Currently, a book with 100 pages of text (even unchained) is totally unusable.

Task result

Significantly faster operations (modify, rotate, resize, reflow, move, and others) over normal and chained text. Fast auto-chaining process.

Implementation idea

After prototype optimizations of layout reuse, it was noticed that chaining 100 pages of text was still slow. The reason (this time) was not the layout, but the creation of the 100th page with the 100th frame in it. This took about two seconds on my computer, which is dozens of times slower than the first frame. So this task should not solve only layout problems.


How to demo

Start Sophie, insert a text frame the size of the page, paste the first half of "Под игото" ("Under the Yoke") by Ivan Vazov, and try to auto-chain it. This should produce about 150 frames of chained text. If it performs reasonably fast, save the book and re-open it in a new instance of Sophie.


The current auto-chaining algorithm is inefficient. It adds a page to the book, then in another auto action inserts a frame into it, then in yet another applies the template and chains it to the head frame. If the text is still not laid out completely, it repeats these steps. Every action causes updates in the GUI; adding a head text frame to a chain even destroys the old view and creates a tail view, which then recomputes. This can be avoided if the static helper methods are not used: we can create a tail text frame and add it to the chain directly. Furthermore, all the pages can be added at once. For this we need a helper method in HotLayout that makes a "dry run" of the layout in order to calculate the number of needed areas. Unfortunately, adding all the frames in one action does not help by itself, because every add still causes an update. But head frames will not be created, frame kinds will not be changed, and all the pages will be added at once, which is much faster.

Another problem is that the layout itself is slow. It is created in the HeadTextFrameViews every time a change in the chain occurs, including when a new frame is added to the chain. The good news is that reusing the previously created areas solves this problem. We consider an area layout reusable if:

  • The area used for the layout is the same (ImmArea implements equals());
  • The area is full, i.e. no more text can be laid out in it;
  • The area does not contain the beginning of the interval where the text has changed;
  • The text we want to lay out is exactly the same (has the same styled hash) or has the same ID as the laid-out one (meaning it derives from it).
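The four reuse conditions above can be sketched as a single predicate. The class and field names below are illustrative stand-ins, not Sophie's actual API; the real ImmArea and area-layout classes live in Sophie's text module:

```java
import java.util.Objects;

// Hypothetical stand-in for Sophie's ImmArea; only equals() matters here.
class ImmArea {
    final double width, height;
    ImmArea(double w, double h) { width = w; height = h; }
    @Override public boolean equals(Object o) {
        if (!(o instanceof ImmArea)) return false;
        ImmArea a = (ImmArea) o;
        return a.width == width && a.height == height;
    }
    @Override public int hashCode() { return Objects.hash(width, height); }
}

public class AreaLayoutReuse {
    final ImmArea area;
    final boolean full;                 // no more text fits in this area
    final int textId;                   // id of the laid-out text
    final long styledHash;              // styled hash of the laid-out text
    final int laidOutStart, laidOutEnd; // interval of text laid out in this area

    AreaLayoutReuse(ImmArea area, boolean full, int textId, long styledHash,
                    int laidOutStart, int laidOutEnd) {
        this.area = area; this.full = full; this.textId = textId;
        this.styledHash = styledHash;
        this.laidOutStart = laidOutStart; this.laidOutEnd = laidOutEnd;
    }

    /** True if this layout can be reused, per the four conditions above. */
    boolean reusableFor(ImmArea newArea, int newTextId, long newStyledHash, int changeStart) {
        boolean sameArea = area.equals(newArea);
        boolean containsChange = changeStart >= laidOutStart && changeStart < laidOutEnd;
        boolean sameText = styledHash == newStyledHash || textId == newTextId;
        return sameArea && full && !containsChange && sameText;
    }

    public static void main(String[] args) {
        ImmArea a = new ImmArea(400, 600);
        AreaLayoutReuse old = new AreaLayoutReuse(a, true, 7, 123L, 0, 500);
        // Change at index 900 is past this area; same text id: reusable.
        System.out.println(old.reusableFor(new ImmArea(400, 600), 7, 999L, 900)); // true
        // Change falls inside this area's laid-out interval: must re-lay out.
        System.out.println(old.reusableFor(new ImmArea(400, 600), 7, 123L, 100)); // false
    }
}
```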

In this case there is no need to create a new area layout. The effect is that when a frame is added to the chain, the layout reuses all the previously created areas and creates only the new one.

There is also a slowdown when creating layouts for large texts. The main reason is that a LineBreakMeasurer is created for every line. The measurer needs an AttributedCharacterIterator, which can be created from an AttributedString. The string needs a StringBuilder, which walks all the characters in the text in order to provide an iterator for a single line. This is done for every line until the end of the text, for every area. For a text with more than 100,000 characters, this is very slow. The problem can be solved by reusing the attributed string in every sub-text of a given one: it is the same and does not need to be re-created. The lines are created using subText(), so a single AttributedString will be made per change.

Another issue is the method HotAreaLayout.splitAreaToLineTexts(). When creating an area layout, all the remaining text is passed in, and this method breaks the whole text into lines and performs operations on some of them. It would be better to generate the lines one by one and stop when the area is full. For this purpose, a LineIterator can be created with a single method, nextLine(), which returns the text up to the next line break and remembers its new position.

Creating the text, line, and segment layouts themselves is actually quite fast, but the create() method of HotAreaLayout is much more time-consuming. The main reason is the nextLayout() method of LineBreakMeasurer, which takes 30-40% of the creation time. Reusing measurers certainly lowers the time needed, and it is probably memory-efficient as well. So, in TextUtils, modify createLineBreakMeasurer() so that it uses a QueryCache with a size of 4096 elements (the default size of 1024 is not enough for a text of about 100 pages).
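A minimal sketch of the LineIterator idea described above, over a plain String for illustration (the real one would operate on ImmHotText via subText()):

```java
// Hypothetical LineIterator sketch: yields one line at a time and
// remembers its position, so callers can stop as soon as the area is full.
public class LineIterator {
    private final String text;
    private int pos = 0;

    public LineIterator(String text) { this.text = text; }

    /** Returns the text up to (excluding) the next '\n', or null at the end. */
    public String nextLine() {
        if (pos >= text.length()) return null;
        int nl = text.indexOf('\n', pos);
        int end = (nl < 0) ? text.length() : nl;
        String line = text.substring(pos, end);
        pos = (nl < 0) ? text.length() : nl + 1; // remember the new position
        return line;
    }

    public static void main(String[] args) {
        LineIterator it = new LineIterator("first\nsecond\nthird");
        String line;
        while ((line = it.nextLine()) != null) {
            System.out.println(line); // first, second, third
        }
    }
}
```

Because lines are produced lazily, splitAreaToLineTexts() would never pay for lines that fall beyond the current area.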
The measurer is constructed over a piece of ImmHotText, so its hash can be the same as the result of getStyledHash() of that text. Since the measurer remembers its current position, the position must be reset to the beginning of the string before the measurer is returned. But there is a problem with this: two texts with the same content but different positions in their parents will have the same styled hash, while their ACIs will have different start indexes. That is why the start index of the ACI generated by a text must always be 0. So, update ImmHotText.toAci() so that it creates an AttributedString from the current text, creates an ACI from it, then creates another AttributedString from that ACI (it becomes a sub-string of the previous one) and returns its ACI. The resulting object will have indexes relative to the current text object, not to the parent text.
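The re-basing trick described for toAci() can be demonstrated with the plain java.text classes: wrapping a ranged iterator in a fresh AttributedString yields an iterator whose indexes start at 0. The helper name below is illustrative:

```java
import java.text.AttributedCharacterIterator;
import java.text.AttributedString;

public class AciRebase {
    /**
     * Builds an ACI for [begin, end) of the parent whose indexes are
     * re-based to start at 0, as the updated toAci() should do.
     */
    static AttributedCharacterIterator rebase(AttributedString parent, int begin, int end) {
        // This iterator keeps the parent's indexes (begin..end).
        AttributedCharacterIterator sub = parent.getIterator(null, begin, end);
        // Wrapping it in a new AttributedString copies the range,
        // so the resulting iterator starts at index 0.
        return new AttributedString(sub).getIterator();
    }

    public static void main(String[] args) {
        AttributedString s = new AttributedString("Hello, world");
        AttributedCharacterIterator aci = rebase(s, 7, 12); // "world"
        System.out.println(aci.getBeginIndex()); // 0
        System.out.println(aci.getEndIndex());   // 5
    }
}
```

With indexes always starting at 0, two texts with equal styled hashes produce interchangeable ACIs, so a cached measurer can simply be rewound with setPosition(0) before being handed out.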

Unfortunately, there is no time left to improve the overall performance. Instead, some other small changes, noticed while looking around the code, will be made (if the reviewers agree):

  • In HeadTextFrameView, getAreas() could take the areas from an AutoProp areas(). This way the area list would not be recomputed when the text changes (currently getAreas() is called by textLayout(), which also tracks the text).
  • The bookView could hold a list of all the PageViews. I know this breaks the lazy-loading idea, but otherwise every head text frame view computes them (the page preview palette does so, too).
  • In PwaSelector, the select() methods can be made a bit faster: they can work with only one set (currently they use two).
  • In SceneHelper.findElementPath, a full DFS is performed every time, even if the element path is found on the first iteration. I don't know how much more efficient an early return would be, but I think it will not be problematic :)
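The early-return idea for findElementPath can be illustrated generically; the tree representation and names below are illustrative, not Sophie's scene API:

```java
import java.util.*;

// Generic DFS that stops as soon as the target is found, instead of
// continuing to walk the rest of the tree as the current code does.
public class DfsEarlyReturn {
    static List<String> findPath(Map<String, List<String>> tree, String root, String target) {
        Deque<String> path = new ArrayDeque<>();
        return dfs(tree, root, target, path) ? new ArrayList<>(path) : null;
    }

    private static boolean dfs(Map<String, List<String>> tree, String node,
                               String target, Deque<String> path) {
        path.addLast(node);
        if (node.equals(target)) return true; // early return: found, stop searching
        for (String child : tree.getOrDefault(node, Collections.<String>emptyList())) {
            if (dfs(tree, child, target, path)) return true;
        }
        path.removeLast(); // backtrack
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> tree = new HashMap<>();
        tree.put("book", Arrays.asList("page1", "page2"));
        tree.put("page1", Arrays.asList("frame1"));
        System.out.println(findPath(tree, "book", "frame1")); // [book, page1, frame1]
    }
}
```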

There is a test case for LBM caching in the branch (branches/private/kyli/text).


The implementation is in branches/private/kyli/text/sophie2-platform. The size of the LBM cache should be around the expected number of LBMs, which I guess equals the number of all the line breaks in the application. I tested with 100 pages of text, and 1024 entries were too few for that. But I did not account for the fact that two books with 100 pages each will probably need a larger cache. So I cannot determine the size, but a comment was put there and it can easily be changed.
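QueryCache is Sophie-internal, but the caching scheme described above can be sketched with a size-bounded LRU map keyed by the text's styled hash; the class name and key type here are illustrative:

```java
import java.awt.font.FontRenderContext;
import java.awt.font.LineBreakMeasurer;
import java.text.AttributedCharacterIterator;
import java.text.AttributedString;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of an LRU measurer cache in the spirit of the QueryCache above.
// The capacity should track the number of distinct line texts in use
// (the spec suggests 4096 for a ~100-page text).
public class MeasurerCache {
    private final int capacity;
    private final LinkedHashMap<Long, LineBreakMeasurer> cache;

    public MeasurerCache(int capacity) {
        this.capacity = capacity;
        // accessOrder=true makes iteration order least-recently-used first.
        this.cache = new LinkedHashMap<Long, LineBreakMeasurer>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<Long, LineBreakMeasurer> e) {
                return size() > MeasurerCache.this.capacity;
            }
        };
    }

    public LineBreakMeasurer get(long styledHash, AttributedCharacterIterator aci,
                                 FontRenderContext frc) {
        LineBreakMeasurer m = cache.get(styledHash);
        if (m == null) {
            m = new LineBreakMeasurer(aci, frc);
            cache.put(styledHash, m);
        } else {
            m.setPosition(aci.getBeginIndex()); // rewind before handing it out
        }
        return m;
    }

    public static void main(String[] args) {
        MeasurerCache cache = new MeasurerCache(4096);
        FontRenderContext frc = new FontRenderContext(null, false, false);
        AttributedCharacterIterator aci = new AttributedString("hello world").getIterator();
        // The second lookup returns the same (rewound) measurer instance.
        System.out.println(cache.get(42L, aci, frc) == cache.get(42L, aci, frc)); // true
    }
}
```

Note that rewinding only works if the ACIs for equal-hash texts are index-compatible, which is exactly what the toAci() re-basing above guarantees.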

P.S. Moving getAreas() into an AutoProp and adding pageViews() to the book view turned out to make book opening two times slower, so I reverted those changes.

Merged to the trunk at [8021]


(Place the testing results here.)


(Write comments for this or later revisions here.)