Imagine how much memory a comic strip that is 10,000 pixels tall takes up. Now imagine that you can’t compress it, because it would lose too much quality and become unreadable. Curious how we deal with this at FUNCORP? Then read on!
Our flagship app, iFunny, is built around user-generated content: videos, GIFs and images. Very big images. And Android, in general, is pretty bad at handling large images.
One of the issues we ran into was the Canvas bitmap size limit. On older devices, errors started popping up as soon as one side of an image exceeded 2k pixels; the actual hardware limit can be queried at runtime via Canvas#getMaximumBitmapWidth and Canvas#getMaximumBitmapHeight.
We got around it by slicing each image in memory and drawing it piece by piece.
The final image is assembled from fragments of acceptable sizes, kind of like a patchwork quilt.
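Here is a minimal sketch of what drawing in tiles can look like. The helper name and the re-slicing inside the loop are ours, for illustration only; a real implementation keeps pre-sliced tiles around instead of cutting the source bitmap on every draw.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Rect

// Draws `source` onto `canvas` in tiles so that no single draw call
// exceeds the limits reported by the Canvas itself.
fun drawInTiles(canvas: Canvas, source: Bitmap) {
    // On hardware-accelerated canvases these report the texture size limit
    // (often 2048 px on older GPUs); software canvases return a large value.
    val maxW = canvas.maximumBitmapWidth
    val maxH = canvas.maximumBitmapHeight

    var top = 0
    while (top < source.height) {
        val tileH = minOf(maxH, source.height - top)
        var left = 0
        while (left < source.width) {
            val tileW = minOf(maxW, source.width - left)
            // createBitmap here is for illustration only; in production you would
            // reuse pre-sliced tiles instead of re-slicing on every draw pass.
            val tile = Bitmap.createBitmap(source, left, top, tileW, tileH)
            val dst = Rect(left, top, left + tileW, top + tileH)
            canvas.drawBitmap(tile, null, dst, null)
            left += tileW
        }
        top += tileH
    }
}
```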
This trick solves the problem of drawing large images, but it does not get rid of memory overflows. To avoid them, we decided to load images the way mapping apps load terrain: a large image is split into smaller tiles, and only the tiles the user can actually see are loaded into RAM.
Our first idea was to slice each image into separate tile files: load the whole image into RAM once, write the tiles to disk, and from then on load only the visible tiles. This made managing the tile files quite hard, because the cache was limited and we had to clean it up regularly.
After some poking around, we discovered that Android has a special mechanism for decoding pieces of an image into a bitmap: BitmapRegionDecoder.
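A rough sketch of how a single tile can be decoded with it. In a real pipeline the decoder lives much longer than one tile (see the pool further down); creating and recycling it per call is only shown here to keep the example self-contained.

```kotlin
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.BitmapRegionDecoder
import android.graphics.Rect
import java.io.InputStream

// Decodes a single tile of a large image without loading the whole file into RAM.
// Tile coordinates are given in the source image's pixel space.
fun decodeTile(input: InputStream, tileRect: Rect): Bitmap? {
    // The second argument (isShareable) is ignored on modern Android versions.
    val decoder = BitmapRegionDecoder.newInstance(input, false) ?: return null
    return try {
        val options = BitmapFactory.Options().apply {
            inPreferredConfig = Bitmap.Config.ARGB_8888
        }
        decoder.decodeRegion(tileRect, options)
    } finally {
        decoder.recycle()
    }
}
```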
The first versions of the algorithm didn’t run too smoothly, because loading individual slices took too long. But we managed to speed it up.
We parallelized the loading of tiles so that decoding a single fragment wouldn’t hold up the rest.
This made it easy to juggle the threading with different Scheduler types.
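The article doesn’t show the actual pipeline, but since Schedulers are mentioned, here is a hedged sketch of what parallel tile decoding might look like with RxJava 3. The decodeTile parameter stands for any function that turns a tile rectangle into a decoded Bitmap, and the concurrency limit of 4 is an arbitrary example, not a value from iFunny’s code.

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect
import io.reactivex.rxjava3.core.Flowable
import io.reactivex.rxjava3.core.Single
import io.reactivex.rxjava3.schedulers.Schedulers

// Decodes the given tiles concurrently and emits (rect, bitmap) pairs as they finish.
fun loadTiles(
    tileRects: List<Rect>,
    decodeTile: (Rect) -> Bitmap
): Flowable<Pair<Rect, Bitmap>> =
    Flowable.fromIterable(tileRects)
        // flatMap lets the inner Singles run concurrently; each one is
        // subscribed on an io() worker, so tiles decode in parallel.
        .flatMap({ rect ->
            Single.fromCallable { rect to decodeTile(rect) }
                .subscribeOn(Schedulers.io())
                .toFlowable()
        }, /* maxConcurrency = */ 4)
```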
The second optimization solved the problem of the decoder’s long initialization, i.e. BitmapRegionDecoder#newInstance().
There is a nuance concerning the reuse of BitmapRegionDecoder: a single instance cannot execute decodeRegion calls in parallel (they are serialized internally), so you can’t just share one decoder instance.
Our solution was to set up a pool of BitmapRegionDecoder instances, so we don’t pay the initialization cost for every tile we load.
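A minimal sketch of such a pool, assuming a fixed number of pre-created decoders. The class name and the blocking-queue approach are our illustration, not iFunny’s actual code.

```kotlin
import android.graphics.BitmapRegionDecoder
import java.util.concurrent.ArrayBlockingQueue

// A tiny fixed-size pool: each borrower gets an exclusive decoder instance,
// which sidesteps the fact that decodeRegion calls on one instance are serialized.
class RegionDecoderPool(
    size: Int,
    createDecoder: () -> BitmapRegionDecoder
) {
    private val pool = ArrayBlockingQueue<BitmapRegionDecoder>(size).apply {
        repeat(size) { add(createDecoder()) }   // pay the newInstance() cost once, up front
    }

    // Blocks if all decoders are busy; returns the decoder to the pool afterwards.
    fun <T> use(block: (BitmapRegionDecoder) -> T): T {
        val decoder = pool.take()
        return try {
            block(decoder)
        } finally {
            pool.put(decoder)
        }
    }

    fun release() {
        while (true) {
            (pool.poll() ?: break).recycle()
        }
    }
}
```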
This approach also plays well with a BitmapPool. Instead of allocating a new bitmap for every tile, we take a reusable instance from the pool and decode into it. This significantly boosted the performance of the app: when the user scrolls through a sliced image, we have to reload quite a few bitmaps of exactly the same size.
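Since all tiles share the same dimensions, reuse boils down to passing a recycled bitmap via BitmapFactory.Options#inBitmap. A hedged sketch follows; a real bitmap pool (e.g. Glide’s BitmapPool) handles sizing and failure cases more carefully.

```kotlin
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.BitmapRegionDecoder
import android.graphics.Rect

// A previously decoded tile that scrolled off-screen can serve as the
// backing memory for the next tile of the same size.
fun decodeTileInto(
    decoder: BitmapRegionDecoder,
    tileRect: Rect,
    reusable: Bitmap?          // a same-sized, mutable bitmap taken from the pool, or null
): Bitmap {
    val options = BitmapFactory.Options().apply {
        inMutable = true       // decoded tiles must be mutable to be reused later
        inBitmap = reusable    // if reuse fails, decodeRegion throws IllegalArgumentException
    }
    return decoder.decodeRegion(tileRect, options)
}
```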
This is how a sliced image looks in iFunny: the red lines are the tile boundaries, the blue lines mark the visible area, and the number in the corner is the count of allocated bitmaps.
Using the pool not only saves RAM but also makes the app run much smoother. Since comics in iFunny are long vertical images, it is natural to slice them into narrow horizontal strips, which lets us evict as many off-screen tiles from RAM as possible.
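A hypothetical helper for deciding which horizontal strips to keep; everything outside the returned range can be recycled back into the pool. The prefetch margin and the function itself are our illustration of the idea, not code from the app.

```kotlin
import android.graphics.Rect

// For a tall comic sliced into horizontal strips of `stripHeight` pixels,
// returns the indices of the strips that intersect the visible viewport
// (plus an optional prefetch margin).
fun visibleStrips(
    viewport: Rect,        // visible area in image coordinates
    imageHeight: Int,
    stripHeight: Int,
    prefetch: Int = 1      // keep one extra strip above and below
): IntRange {
    val lastIndex = (imageHeight - 1) / stripHeight
    val first = (viewport.top / stripHeight - prefetch).coerceAtLeast(0)
    val last = (viewport.bottom / stripHeight + prefetch).coerceAtMost(lastIndex)
    return first..last
}
```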
Unfortunately, BitmapRegionDecoder behaves differently depending on the Android version. For instance, on Android 5 and 6 we saw decoding artifacts. The culprit? The way our backend encodes images with mozjpeg. We decided not to dig into it further, since very few of our users are on such devices.
As a result, the difference in memory use is noticeable even in the Android Studio profiler and amounts to about 10–15% per image. We are now rolling the change out in production and have already seen a decrease in OutOfMemory errors and ANRs.
We plan to continue our growth and development by entering new markets and finding new business niches. Take a look at open positions. Perhaps there is one that is right for you!
If you know a passionate Software Developer who’s looking for job opportunities, e-mail us at job@fun.co. If your recommendation is successful, you will get a $3,000 referral fee.