Skia is a great library, but as with all things Google, it's a pain to build. They don't use CMake, and building it from source takes 20-30 minutes on a modern laptop. Furthermore, it's constantly changing its APIs, and much of it is undocumented or unclear about how to use it optimally. Most of the decisions taken by the development team aren't discussed in the open, and this makes it hard to understand the codebase.
I wish there was a nice and small vector graphics library with GPU acceleration. So far Skia is the only real option, despite its downsides.
I have an open source project that uses Skia, and I just keep static libraries for all target platforms because the Skia build process is so painful.
Maybe once a year I bite the bullet, do a new Skia build on all the platforms, and then I have to figure out how the C++ API has changed. At least that’s just rote work of fixing compiler errors by looking at the new header files.
Even though it’s a pain in the ass, I still use Skia because it has the best combination of performance and features. Sadly, Cairo doesn’t quite compete. Skia gives my project a pretty good guarantee that 2D graphics render like they do in Chrome, and that’s important for this use case.
The Skia C++ API changes quite a lot. If I didn't sync up regularly, the tech debt could become a problem down the line - for example, if I want to add a platform that does need a new build of the library.
Yes. So in Sciter I've replaced its build system with a relatively simple premake5 script. That replacement took a couple of days but was worth it. Premake5 generates human-readable IDE solutions and makefiles, so you need just a compiler to build the whole thing.
> building it from source takes 20-30 minutes on a modern laptop.
It is actually not that bad. I just tried a full rebuild of the x64/Windows version:
Whole sciter.dll (HTML/CSS/JS/Graphics) with Skia backend:
sciter.dll build completed at 9:11 AM and took 07:03.415 minutes
Same sciter.dll but with Direct2D backend:
sciter.dll build completed at 9:22 AM and took 02:34.412 minutes
So Skia itself accounts for roughly 4.5 minutes of build time (07:03 minus 02:34) on a pretty average development desktop machine.
> it's constantly changing its APIs
That's very true, and it is indeed a pain if you change its version frequently. Yet there is no such concept as a "Skia version" - just revisions/milestones. There used to be an attempt at a stable plain C API, but AFAIR it was removed recently.
premake5 is a monolithic/portable executable that contains Lua plus a specific runtime.
Thus it does not rely on an installed Python, as GN and other such tools do.
Having a standard, well-known, and documented Lua on board benefits the maker a lot. In my opinion, any modern build system must embed a generic, well-known programming language.
And that is exactly the problem with modern CMake. It started as a simple, static, declarative thing, but life forced it to evolve into a dynamic programming language with a very strange notation and runtime model.
According to their benchmark, multi-threaded rendering on a Ryzen 7950X with 4 cores takes about 1.7ms to draw 1000 polygons with 40 vertices each in a 32x32 px area, which seems like a reasonable approximation of a text character on a high-DPI display. At the default font size, JetBrains IDEs fit about 2500 characters onto my screen, so I'd expect a 4.25ms frame time, meaning I am capped at 235 FPS with 4 CPU cores running at full speed.
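(Spelling that arithmetic out: 2500 glyphs is 2.5x the benchmark's 1000-polygon workload, so 2.5 x 1.7 ms = 4.25 ms per frame, and 1000 ms / 4.25 ms ≈ 235 FPS.)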
I believe the best way is probably to use Blend2D for rendering glyph bitmaps and then compositing them into the full text on GPU.
Sadly, CPU memory is still plenty slow compared to GPU memory, and when you need to copy around 100 MB images (4K RGB float), that quickly becomes the limiting factor.
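(For scale - my arithmetic, assuming a 3840x2160 target: 3840 x 2160 px x 3 float channels x 4 bytes ≈ 99.5 MB per frame, so even at an optimistic ~10 GB/s of effective copy bandwidth, moving a single frame costs ~10 ms.)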
Text rendering is something that will get improved in the future.
At the moment, when you render text, Blend2D queries each character from the font, rasterizes all the edges, and runs a pipeline to composite them. All these steps are super optimized (there is even a SIMD-accelerated TrueType decoder, which I have successfully ported to AArch64 recently), so when you compare this approach against other libraries you still get something like a 4-5x performance difference in favor of Blend2D. But if you compare this method against cached glyphs, Blend2D loses, as it has to do much more work per glyph.
So the plan is to use the existing pipeline for glyphs that are larger (let's say 30px+ vertically) and caching for glyphs that are smaller. How the caching will work is still being researched, as I don't consider simple glyph caching in a mask a great solution (a mask cannot be sub-pixel positioned and cannot be rotated - and if you want sub-pixel positioning, the cache has to store each glyph several times).
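To make that memory tradeoff concrete, here is a hypothetical sketch (not Blend2D code - all names and the bucket count are invented for illustration) of what a sub-pixel-aware glyph-mask cache implies: every glyph gets rasterized once per horizontal offset bucket, multiplying the cache size, and rotation still cannot be served from it at all.

    // Hypothetical glyph-mask cache illustrating the sub-pixel cost.
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    constexpr int kSubpixelBuckets = 4; // 4 horizontal positions per glyph

    struct GlyphKey {
      uint32_t glyphId;
      uint16_t pixelSize;
      uint8_t  subpixelX; // 0..kSubpixelBuckets-1 -> 4 cached copies per glyph
      bool operator==(const GlyphKey&) const = default;
    };

    struct GlyphKeyHash {
      size_t operator()(const GlyphKey& k) const {
        return (size_t(k.glyphId) * 8191 + k.pixelSize) * kSubpixelBuckets + k.subpixelX;
      }
    };

    // Values are A8 coverage masks; a rotated glyph can't reuse any entry.
    using GlyphMaskCache =
        std::unordered_map<GlyphKey, std::vector<uint8_t>, GlyphKeyHash>;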
There is a demo application in the blend2d-apps repository that can be used to compare Blend2D text rendering against Qt, and the caching Qt does is clearly visible in this demo - when the text is smaller, Qt renders it differently, and characters can "jump" from one pixel to another when the font size is slightly scaled up and down. So Qt glyph caching has its limits, and it's not nice when you render animated text, for example. This is a property I consider very important, which is why I want to design something better than glyph masks that would still be simple to compute on the CPU. One additional interesting property of Qt glyph caching is that the first time you render text at a size that was not previously cached, something in Qt takes 5ms to set up, which is insane...
BTW, one nice property of Blend2D text rendering is that when you use the multithreaded rendering context, the whole text pipeline runs multithreaded as well (all the outline decoding, GSUB/GPOS processing, rasterization, etc.).
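For reference, opting into the multithreaded context is just a creation parameter in Blend2D's C++ API - a minimal sketch (the font path is a placeholder):

    #include <blend2d.h>

    int main() {
      BLImage img(800, 200, BL_FORMAT_PRGB32);

      // Ask for a multithreaded rendering context with 4 worker threads.
      BLContextCreateInfo cci{};
      cci.threadCount = 4;
      BLContext ctx(img, cci);

      ctx.clearAll();

      BLFontFace face;
      if (face.createFromFile("NotoSans-Regular.ttf") != BL_SUCCESS) // placeholder path
        return 1;

      BLFont font;
      font.createFromFace(face, 40.0f);

      ctx.setFillStyle(BLRgba32(0xFFFFFFFF));
      ctx.fillUtf8Text(BLPoint(20, 120), font, "Multithreaded text");

      ctx.end(); // flushes the command queue and joins the workers
      return img.writeToFile("text.png") == BL_SUCCESS ? 0 : 1;
    }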
Cairo is in maintenance-only mode. Nobody develops the library anymore; it only has a maintainer or two. Since nobody has really worked on Cairo in the past 15 years, it's not optimized for modern hardware.
You can see some existing benchmarks here:
- https://blend2d.com/performance.html
Both the benchmarking tool and Blend2D are open-source projects so anyone can verify the numbers presented are indeed correct, and anyone can review/improve the backend-specific code that is used by the benchmarking tool.
That’s crazy. I once lurked in the project's IRC channel. I knew the creator; he was a family friend. I was a silly teen kid toying with Linux; he was a dev who worked at Red Hat and lived in the same town as me.
I think that when it comes to 2D rendering libraries there are, in general, not too many options if you want to target the CPU or both CPU+GPU. Targeting GPU-only is bad for users who run on hardware where the GPU doesn't perform well, isn't usable due to driver issues, or is simply not present (like servers).
If you consider libraries that offer CPU rendering, these are basically the options:
- AGG (CPU only)
- Blend2D (CPU only, GPU planned, but not now)
- Cairo (CPU only)
- Qt's QPainter (CPU only, GPU without anti-aliasing / deprecated)
- Skia (CPU + GPU)
- tiny-skia (CPU only, not focused on performance)
- GPU-only libs (there are many in C++ and Rust)
Nobody develops AGG or Cairo anymore, and Qt's QPainter hasn't really improved in the past decade (the Qt Company's focus is QtQuick, which doesn't use QPainter, so they don't really care about improving QPainter's performance). So only 2 libraries from this list are under active development - Blend2D and Skia.
As the author of Blend2D, I hope that it will be a go-to replacement for both AGG and Cairo users. Architecturally, Blend2D should be fine after a 1.0 release, as the plan is to offer a stable ABI with 1.0. And since Blend2D only exports a C API, it should be a great choice for users who want to use every cycle and who want their code to keep working instead of making changes every time the dependency is updated (hello Skia).
At the moment, Blend2D focuses on AGG users though, because AGG is much more widespread in commercial applications due to its licensing model and extensibility. However, AGG is really slow, especially when rendering to large images (like 4K), so switching from AGG to Blend2D can offer great performance benefits while avoiding other architectural changes to the application itself.
BTW, Blend2D is still under active development. It started as an experiment, and historically it only offered great performance on X86 platforms, but that is changing with a new JIT backend that supports both X86 and AArch64 and is almost ready to merge. This is good news, as it will enable great performance on Apple hardware and other AArch64 devices, basically covering 99% of the market.
It's a tiny single-header C++ library in the style of the STB libraries. My aim was to make it dirt simple to drop into almost any project and get high-quality rendering, while providing an API comfortable to those used to <canvas>.
I've been checking out Blend2D every now and then. It seems like a very nice option for the bigger, but faster and more fully-featured end of the spectrum.
(Though for what it's worth, while raw performance isn't my priority, my little library can still hit about 70fps rendering the PostScript Tiger at 733x757 resolution with a single thread on my 7950X. :-)
BTW, for comparison: Blend2D can render the SVG tiger in 1.68ms on the same machine (I also have a 7950X), so it can provide almost an order of magnitude better performance in this case (70fps is ~14.3ms per frame), which is great I think. But I understand the purpose of your library; sometimes it's nice to have something small :)
Do not forget: https://www.amanithvg.com (I'm one of the authors; 20+ years of active development). Full OpenVG 1.1 API, CPU-only, cross-platform, with analytical coverage antialiasing (rendering quality) as the main feature. The rasterizer is really fast. I swear ;)
At Mazatech we are working on a new GPU backend just these days.
AmanithVG is the library on which our SVG renderer, https://www.amanithsvg.com, is based. All closed source as of now, but things may change in the future.
I will do some benchmarks of the current (and next, once the new GPU backend is ready) version of our libraries against other libraries. Do you know if there are any standard tests (besides the classic PostScript Tiger)? Maybe we can all agree on a common test set for all vector graphics library benchmarks?
That's right! I didn't consider closed-source libraries when writing the list. There would be more options in that case, like Direct2D and CoreGraphics. However, my opinion is that nobody should be using closed-source libraries to render 2D graphics in 2024 :)
Regarding benchmarks - I think the Tiger is not enough. The Tiger is a great benchmark for exercising the rasterizer and stroker, but it doesn't provide many metrics about anything else. It's very important how fast a 2D renderer draws small geometries, be they rectangles or paths, because when you look at a screen, most stuff is actually small. That's the main reason the Blend2D benchmarking tool scales the size of geometries from 8x8 to 256x256 pixels: to make sure small geometries are rendered fast and covered by benchmarks. When you explore the results, you will notice how inefficient other libraries actually are in this regard.
> There was an attempt at making Cairo support GPU rendering, which did not work particularly well due to the library being designed around stateful operation based upon the PostScript model—resulting in a convenient and familiar API, great output quality, but hard to retarget and with some particularly slow corner cases. Meanwhile, other web engines have moved more work to the GPU, including 2D rendering, where many operations are considerably faster.
I had to build a map module for a Python desktop application; the options were to either embed a full browser and use Google Maps, or do it myself. A C module using Cairo was MUCH faster and easier to achieve in 2013. So much faster that I didn't have to implement some optimizations I had planned, because it was already running at about 50fps on average computers with ~1000 individually drawn markers on a map. Requiring a GPU for this wasn't an option anyway.
Cairo has only limited support for GPU acceleration and hasn't seen much development this decade. So users who care about performance have either switched away from Cairo entirely or are at least reducing its usage and are taking it out of the fast path.
Cairo's OpenGL support was removed, but I thought Cairo's X11 backend still has GPU acceleration for a few operations through XRender (depending on your video driver).
That's true, Cairo still provides the XRender backend. Not sure it's that usable though, as I think nobody really focuses on improving XRender, so it's probably in the same state as Cairo itself.
I get the point about it being developed with one primary objective - but perhaps a naive question here - in the end, isn't that primary objective a shared one: to render text, lines, curves, and images as fast as possible via some sort of higher-level API?
And to do so across multiple OS and hardware backends?
Skia relies heavily on runtime shader compilation, which is slow and causes frame jank but improves peak performance. Skia also supports much older devices than Flutter does.
There’s no free lunch, Impeller has a different set of trade offs that are a better fit for Flutter.
It's the old coupling versus cohesion problem, isn't it? The tighter the coupling the more likely that abstractions leak across API boundaries. The tighter the monorepo the less likely there's a concerted effort to avoid breaking changes and consider long term API stability.
We wrote exactly that for our game-focused port of WebKit [1].
The CPU renderer uses a tiny, custom fork of Skia (we only use the path rasterizer and its SSE2, AVX2, and NEON backends), and our GPU renderer draws directly on the GPU via tessellated paths / hardware MSAA (DX11, DX12, GL, Metal, Vulkan).
Our joke was "the recommended way to build Skia is to become a Google employee, but there are workarounds available if for some reason that isn't practical".
There's also the question of "which parts of Skia". If there are five different conceivable ways to implement something in vector graphics, Skia will implement all five, and there will be some sort of hidden obscure configuration setting that Chrome and Android will use to determine which one actually gets used. It's a very unfriendly piece of software to use, honestly.
Out of interest, what's difficult about building it? In my experience CMake isn't exactly a great developer experience either, and many projects of this size take similar times to build. Is the problem specific to Skia or to Google open source projects, or is it more down to the (necessarily large) size of the project?
The difficult parts change all the time and usually boil down to some sort of undocumented or poorly-documented dependency, especially if you're trying to enable the GPU backends. Every time someone I know tries to get it building it takes them a week to figure out how to do it.
I used to do that when I started out with C/C++... 15 years ago. Now my patience runs out after 10 minutes. Tooling has become so much better, but it seems not that much with some C/C++ projects. At least the build times are better; I remember big libs that could take hours to build.
Assuming that the Skia maintainers keep that working, it might be easier to build the buildbot and use that to build Skia, than to build Skia directly!
Yes, GN is IMO a much better experience than CMake. It's fully self-contained in a single binary that you can easily distribute with your code. I have a small Python script I use with it to regenerate files and then run Ninja to do the actual builds.
I've been using it for years and have no complaints.
I was skeptical of your claims about building it, so I went ahead, downloaded Skia, and built it myself. It was simple, and on my 4-year-old desktop (8 cores) it took under a minute to compile Skia after it had downloaded its dependencies. All I did was run 2 commands: ./tools/git-sync-deps and bazel build //:skia_public. This was not painful at all.
It looks like they are several years into a migration to Bazel and support both Bazel- and GN-based builds. Bazel is usually faster. It's possible the GN-based build builds everything in the repo, including all tests, while the Bazel one is more targeted and builds fewer things.
How long did downloading dependencies take? That's part of the build process.
As a maintainer of a project that includes another popular library from Google, here's why it's difficult to build:
- building requires downloading Google's custom toolchain and build system
- dependencies are huge, so you get lengthy download times even on fast connections
- it usually works if you are using recent versions of the OS and Python, but if you try running the same command in a year or two it might fail because they changed the requirements
- if anything fails, you have to dig through multiple levels of abstraction to figure out where it failed and why
- if you want to maintain software for a few years, you'll have to keep fixing the build process, because the build will suddenly stop working for unknown reasons once a year or so
It's dependent on one's internet speed, so I didn't think it made much sense to time it. If it had taken 20 to 30 minutes to download, I would have mentioned it.
Using the official instructions, it took 4 minutes to download Skia and all its deps from scratch and build it with Ninja. It only had 1235 build actions ... that's like zero.
If this seems like a big lift, people are going to hate building Chromium.
It depends on what features you have enabled, how slow your internet connection is, and obviously your hardware/software config.
On my 10th-gen Intel i5 laptop with 8 GB RAM (on Windows), it takes 15 minutes to build if I don't do anything else, but if I start using Firefox and IDEs, the build times are easily in the 20-25 minute ballpark.
I'd encourage you to look into Bazel. It's really a great build system, albeit super opinionated. It should be relatively easy to build Skia as part of your project if you are using Bazel yourself.
Disclaimer: I work on Vello[0], but not on the Skia integration directly
My understanding, having not dug into it too much, is that the Skia integration does exist but isn't enabled by default or used by any clients at the moment. That is, I don't know that this integration is shipping anywhere.
Vello still has some definite rough edges, so I'm not sure I'd recommend using it in a production application at the moment. We also don't have a C API, which might rule it out for some cases where you'd be considering Skia.
I recommend Blend2D: https://blend2d.com/. Very well done, very fast. No GPU acceleration, but it has a JIT, and you can probably get further than you think without it.
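To give a feel for the API, here's a minimal sketch along the lines of Blend2D's own getting-started sample (the coordinates are arbitrary):

    #include <blend2d.h>

    int main() {
      BLImage img(480, 480, BL_FORMAT_PRGB32);
      BLContext ctx(img);

      ctx.clearAll();

      // Build and fill a path; this is the kind of work the JIT pipelines speed up.
      BLPath path;
      path.moveTo(26, 31);
      path.cubicTo(642, 132, 587, -136, 25, 464);
      path.cubicTo(882, 404, 144, 267, 27, 31);

      ctx.setCompOp(BL_COMP_OP_SRC_OVER);
      ctx.setFillStyle(BLRgba32(0xFFFFFFFF));
      ctx.fillPath(path);

      ctx.end();
      return img.writeToFile("getting_started.png") == BL_SUCCESS ? 0 : 1;
    }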
A couple of things I've seen in this space while looking around the past couple of weeks...
Flutter made a different engine called Impeller[0], which is replacing Skia. That's a bit surprising to an ignorant outsider. I hope it works out.
Rive (https://rive.app) is a new animation tool that targets multiple platforms, including the web. Their CEO, Guido Rosso, gave a great interview on School of Motion[1] about how they are building an animation-first vector engine. There is a side-by-side demo at 46:56[2] of Skia, Impeller, and Rive.
Also upcoming is Vello[0][1] by Raph Levien, who has a lot of experience with font rendering at Google. He's also working on a Rust UI library[2] and has a great talk on compute shaders[3].
I've been following Rive for a while; it's an interesting approach. I think it was a response to the fact that Lottie was tied to Adobe XD (I think?). I'm not sure if Rive's renderer is open source though.
Completely ignorant question: what's the issue with Cairo? I know it has had a hard time getting maintained, but is there any article or blog post about how it works or where it falls short? I've never used it directly, but I'm super curious to know more!
> There was an attempt at making Cairo support GPU rendering, which did not work particularly well due to the library being designed around stateful operation based upon the PostScript model—resulting in a convenient and familiar API, great output quality, but hard to retarget and with some particularly slow corner cases. Meanwhile, other web engines have moved more work to the GPU, including 2D rendering, where many operations are considerably faster.
I was curious how Igalia was funded, and this was a surprisingly good interview on how they're structured, how they work with the major browser companies, etc. Apparently many Apple, Google, and Firefox engineers left to work for them.
Since they are apparently "powerful" enough to decide major directions of WebKit development, as evidenced by OP's article, what exactly is their relationship with Apple in this regard? Like, who has the final say, and who makes the day-to-day decisions?
I'm always curious about the politics and power structures/dynamics of these major open source projects, especially the ones backed by large companies.
To be clear, WebKit is made up of multiple ports. These ports are maintained by their own groups, and Igalia maintains WebKitGTK/WPE, where this change is happening. It does not affect the Apple ports of WebKit.
To answer your question, though: WebKit is Apple's project, and they make the majority of contributions. Igalia is the second-largest contributor and collaborates with Apple regularly. Within the GTK/WPE ports, Igalia is in control.
Uhm, I am a bit sad that bringing Cairo up to speed isn't easier, as Cairo is an easier-to-use dependency than Skia.
I always find building these complex things from Google a huge pain - and now they have the additional idea of living at HEAD and not providing actual releases, too...
"Impeller is a new Flutter rendering engine that the Flutter team claims solves the early-onset jank problem. It is designed as a replacement for Skia, with the goal of enabling better animations and addressing the jank issue, while also potentially providing support for 3D, which was not previously possible with Skia, as it exclusively supports 2D. Unlike Skia, Impeller compiles shaders during the build process instead of at runtime.
In Flutter 3.10, Impeller replaces Skia engine and becomes the primary rendering engine on iOS."
There are often multiple compilation steps: first compiling some application-specific representation down to HLSL/GLSL/MSL source code, then compiling that down to DXIL/SPIR-V/Metal bytecode, and then handing that off to the driver for compilation to native code. At least the first two steps can be done ahead of time; the bytecode is stable.
Metal and CUDA actually let you AOT-compile native binaries, since they have relatively few hardware targets to support, with a fallback to compiling bytecode to native code at runtime for forward compatibility.
The bytecode compilation doesn't really matter; it's still gonna jank. Maybe not as badly, but it will. The only way to get rid of jank is to cache not the bytecode, and not even the native code, but the PSOs. Realistically, that is only possible on the target machine, unless you have a ginormous farm of machines representing all the permutations of hardware and drivers. That's basically what Steam does with its mass of users.
The alternative is to just not have that many shader permutations, and potentially take a performance hit from that. This seems to be the strategy that Impeller is following.
I have not seen prebuilt PSOs shipped outside of game console platforms. Typically, games that care do on-device caching, if anything. Unfortunately, lots of projects just take shader hitches as a given.
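In Vulkan terms, on-device caching usually means persisting a VkPipelineCache - a minimal sketch (the blob-loading helper is hypothetical; error handling omitted). The point is that the expensive native-code compilation happens at pipeline (PSO) creation, not at shader-module creation:

    #include <vulkan/vulkan.h>
    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> loadBlob(const char* path); // hypothetical disk helper

    VkPipeline buildPipeline(VkDevice device,
                             const VkGraphicsPipelineCreateInfo& info) {
      // Seed the pipeline cache from a previous run, if we have one.
      std::vector<uint8_t> blob = loadBlob("pso_cache.bin");
      VkPipelineCacheCreateInfo cacheInfo{VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO};
      cacheInfo.initialDataSize = blob.size();
      cacheInfo.pInitialData = blob.empty() ? nullptr : blob.data();

      VkPipelineCache cache = VK_NULL_HANDLE;
      vkCreatePipelineCache(device, &cacheInfo, nullptr, &cache);

      // This is where the driver compiles to native code - the jank source.
      // With a warm cache the driver can skip most of that work.
      VkPipeline pipeline = VK_NULL_HANDLE;
      vkCreateGraphicsPipelines(device, cache, 1, &info, nullptr, &pipeline);

      // After the first run, vkGetPipelineCacheData() can serialize the cache
      // back to disk so later launches don't hitch.
      return pipeline;
    }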
Why would they? The API they (WebKit) are providing is the Canvas API; the fact that they're using Skia behind the scenes is an implementation detail. I don't see why they would go through the trouble of maintaining a stable C API for internal use in a C++ project.
This assumes that maintaining a stable C interface to Skia, and then writing the WebKit rendering code against that C interface, is easier than maintaining WebKit rendering code that calls Skia's C++ interface directly. I'm not convinced that's the case.
At the very least, the stable interface should probably be C++; mapping between C and C++ is often non-trivial...
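For illustration, a hypothetical shim (not anything WebKit or Skia actually ships) shows what maintaining such a boundary means: opaque handles plus a hand-written forwarding function for every entry point, with all C++-only concepts (smart pointers, templates, exceptions) flattened away at the boundary.

    /* canvas_c_api.h - hypothetical stable C facade over a C++ renderer. */
    #include <stdint.h>

    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct canvas_t canvas_t; /* opaque handle hiding the C++ type */

    canvas_t* canvas_create(int width, int height);
    void canvas_fill_rect(canvas_t* c, float x, float y, float w, float h,
                          uint32_t rgba);
    void canvas_destroy(canvas_t* c);

    #ifdef __cplusplus
    } /* extern "C" */
    #endif

Every new feature on the C++ side then needs a matching C entry point, which is exactly the maintenance burden being weighed here.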
I think only Microsoft maintains a stable C API, for their SkiaSharp. Skia only cares about Google projects. I don't even know why they bother to open-source it.
Open source doesn't mean there are any obligations. It's surprising that we don't see more of this. I imagine Google wouldn't even bother making the source available if there were an obligation attached.
One way to keep a stable API is to pin the version used...
And let's be honest, the external API doesn't change awfully fast anyway. A totally dead project would fall behind, but even a tiny amount of work would be able to keep up.
I wondered about the license because they already had problems with LibWebRTC, which uses BoringSSL (BSD license). Skia seems to use the new BSD license, without the advertising clause, and is therefore compatible with the GPL.
PS: As valleyer mentions, this affects WebKitGTK, not all WebKit ports. At least Skia is usable on macOS/iOS.
I think they're trying to unify their rendering workflow, just like Chromium did when it forked off Blink and switched to Skia.
In WebKit, however, that implies refactoring the Bridge API used between contexts and processes, which was an internal API before and broke often.
So I'd guess they'll start with incremental changes to the Web API implementations before they break too much anywhere else (e.g. sidebars, UIs, widgets, and devtools are rendered differently but rely on the very same Bridge API).
Could someone who knows Skia internals describe, if possible, the tradeoffs that Skia has made between performance, rendering quality and API DX?
Presumably this is partly why it has become so popular, but as someone who's been writing mostly 2D GL/WebGPU apps for a decade, I've only briefly considered Skia as an alternative, mostly out of ignorance.
I've only briefly looked at Skia's internals... and it's awfully complex!
But the external API is fairly simple: basically, you queue up various drawing commands and at the last minute tell it to execute the queued commands somewhere.
The queue is 'optimized' - for example, if you tell it to draw some text and later crop the text out of the image, then no CPU/GPU time is spent drawing text that won't be visible in the output. You can also draw text small and scale it up later without getting the jagged edges you'd get from scaling up a raster image.
That ability lets you build tile-based renderers, where you queue up all the commands for, say, a whole webpage, but then draw only a small square of the page, like a map tile; as the user scrolls/pans, you draw more tiles as needed.
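A minimal sketch of that record-then-replay pattern using Skia's picture API (header paths and names as of recent milestones - Skia's API does drift between revisions):

    #include "include/core/SkCanvas.h"
    #include "include/core/SkPaint.h"
    #include "include/core/SkPicture.h"
    #include "include/core/SkPictureRecorder.h"
    #include "include/core/SkRect.h"

    // Record all of the page's drawing commands once...
    sk_sp<SkPicture> recordPage(float pageW, float pageH) {
      SkPictureRecorder recorder;
      SkCanvas* canvas = recorder.beginRecording(SkRect::MakeWH(pageW, pageH));
      SkPaint paint;
      paint.setAntiAlias(true);
      canvas->drawCircle(200, 200, 80, paint); // stand-in for real page content
      return recorder.finishRecordingAsPicture();
    }

    // ...then replay just the commands that land inside one tile;
    // anything clipped out costs (almost) nothing.
    void drawTile(SkCanvas* target, const SkPicture& page,
                  float tileX, float tileY, float tileSize) {
      target->save();
      target->clipRect(SkRect::MakeWH(tileSize, tileSize));
      target->translate(-tileX, -tileY);
      page.playback(target);
      target->restore();
    }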
I always thought Safari was the fastest at rendering web pages on Mac/iOS. But it sounds like I was wrong?
> In December 2023 we made the decision of giving Skia a try internally and see if it would be worth the effort of maintaining the project as a third party module inside WebKit. In just one month we had implemented enough features to be able to run all MotionMark tests. The results in the desktop were quite impressive, getting double the score of MotionMark global result. We still had to do more tests in embedded devices which are the actual target of WPE, but it was clear that, at least in the desktop, with this very initial implementation that was not even optimized (we kept our current architecture that is optimized for CPU rendering) we got much better results.
This post seems to be just about the GTK port of WebKit, so they're comparing Skia vs Cairo on Linux. There's no mention of Skia vs CoreGraphics for macOS; it doesn't sound like that is being considered.
I don't know the full story behind it, but from an outsider's point of view, any open library that pulls that kind of weight for so long should be considered a major feat of engineering.
Somewhat related question: what is WPE? It's something that got abstracted out of WebKitGTK, right? Is it a standalone port itself, or is it more like a framework for building a highly specialized one? I've seen, at least in past versions, that libwpe is an optional dependency of WebKitGTK, so it doesn't seem like they abstracted it out and then rebased WebKitGTK on the new component. Or is libwpe separate from WPE?
It's been a few years since I looked into it, so this might be out of date. But WPE is targeted at kiosks and places where you might want to display web content without a full web browser. Igalia can sell consulting services to these companies, as opposed to WebKitGTK, which has a small number of non-paying users. So WPE serves as a place for more active development of WebKit-on-Linux while not breaking WebKitGTK, which powers the "Web" browser on GNOME. Things from WPE tend to slowly make their way into the WebKitGTK build eventually. It's all maintained by the same people.
Looking at https://webkit.org/wpe/, the first design goal is the one that justifies WPE vs other webkit ports: "To provide a no-frills, straight to the point, web runtime for embedded devices."
The other goals, like standards compliance and hardware acceleration, are there to differentiate WPE from the non-WebKit and ancient-WebKit browser engines that people might otherwise use on embedded devices.
You could describe WPE as WebKitGTK but you bring your own GTK.
GTK is sometimes inappropriate for embedded devices that have a unique display stack. Using WPE brings in minimal dependencies and can render to anything you desire.
libwpe is just used for some basic code sharing between the two, GTK is not based on WPE.
I wish WebKitGTK were more performant in general (and better covered in the MDN compatibility tables, but that's another topic). It is getting better, but it really is a blocker for shipping quality applications with Tauri on Linux. There is a reason most apps using native webviews only target macOS and Windows: WebKitGTK is mostly undocumented for frontend engineers and often very outdated / not installed on end-user machines.
The performance improvements in WebKitGTK aren't small; in my personal benchmarks on MotionMark, performance has doubled in recent years, and that is before Skia.
Adding to this wishlist: if it were available on Windows, it would be an option for a cross-platform webview widget, but WebKit hasn't supported Windows for a while now.
Please have a look at this one: https://github.com/thorvg/thorvg.
ThorVG is portable and super lightweight, and a WebGPU backend is currently under development.
Particularly with Blink being entirely reliant on Chromium for multiprocess, whereas WebKit handles multiprocess itself. This was actually one of the issues that spurred Blink's forking, and it entails significantly different architectures.
And there was work in Chromium's PDF library, PDFium, to switch from AGG to Skia. It was going slowly for many years and wasn't finished the last time I looked, about three years ago.
All right, but apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, a fresh water system, and public health, what have the Romans ever done for us?
Note that Skia is only ever used on Fuchsia for Flutter. Flutter shares that Skia dependency across all the platforms it targets (those that haven't otherwise been migrated to Impeller).
> but none of them met all our requirements, so we decided to try writing our own library.
Question the requirements.
You can always find a reason to build rather than buy, but can you reframe your requirements in such a way that you can get away with something off the shelf, and then spend your resources on the things you can do uniquely for your application?
Maybe try to build your own browser and keep up with an upstream that has daily changes in the hundreds of commits... and then realize how elitist that answer was?
They maintain a fork of a software product used by billions, with a team that doesn't get paid to develop it and without enough funding to just "buy" a battle-tested, zero-problem library - and any bug could literally break the web for years.
I mean, I really don't know what library they could've tried other than Skia that's suitable for building a GPU renderer for the Canvas API, and they explain why they were hesitant to try Skia. Do you have any suggestions?
Oh, I read the article with the assumption that this was about 2D canvas rendering, but upon re-reading it I see that it's probably about normal web content. WebRender would've been a terrible fit for Canvas, but it would certainly make sense if this is about rendering normal web content.