Hello All,
doc.text("very large url.....") fails when the data passed in is a very long URL string. After debugging the issue, I found that the root cause is at line 77 in lib/line_wrapper.coffee:

    while w > @spaceLeft
      w = @wordWidth word.slice(0, --l)
If the string is very long, say 1000 characters, then depending on the font size, character spacing, and other factors that influence the word width, this loop shrinks the slice one character per iteration and can run a huge number of times; the repeated slicing and width calculations eventually trigger a heap out-of-memory exception.
Pull request #685 is one way to reduce the number of iterations needed to find the longest substring that fits in the available space.
Could you please check whether this PR can be merged, or let us know if there is another way to fix this issue?
Regards,
Kesava T