Raster
    Abstract base driver class for drawing pixel data
- Raster8 (Raster)
    8-bit colour interface
- Raster16 (Raster)
    16-bit colour interface
- ILI9325C (Raster16)
    16-bit colour driver for ILI9325. (subclassed by 'local' drivers)
- ILI9325C_Pins (ILI9325C)
    example driver variant using digitalWrite to set pins (slow, but general)
- ILI9325C_Leo (ILI9325C)
    example driver variant using direct port access (fast, but Leonardo-specific)
ST7735
    Abstract base class for ST7735 24-bit LCD driver
- ST7735_SPI (Raster16, SPIDevice, ST7735)
    driver variant using 16-bit colour data and SPI (subclassed by 'local' drivers)
RasterDraw
    Vector drawing primitives
- RasterDraw16 (RasterDraw)
    16-bit optimized versions of drawing primitives
RasterFont
    Font drawing primitives
- RasterFont16 (RasterFont)
    16-bit optimized versions of font primitives
Why on earth would I rewrite the GFX Classes? That's a major question. And one that needs an answer, because I've essentially forked the effort. There needs to be a good reason.
- One reason is "244 bytes", which is the difference in compiled size between the Raster and GFX code paths in my "raster_test" demo. Raster is smaller (of course), and 244 bytes is not a trivial amount.
- Another reason is "1,280 bytes", which is the size of the GFX font data block. You can't change or cut down this font without messing with the library files and affecting all your projects. Very few projects use the entire font. Raster lets you use custom fonts.
- Another is "200%", which is roughly how much faster the font-rendering code in Raster is compared with GFX. (Although GFX beats mine at rectangle/screen clearing.)
- But the primary reason is this: to make driver writing easier, while optimizing performance and code size.
One big thing to remember about Arduino development: fewer methods == less space. Functions, especially virtual methods on objects, are not cheap. Let's look at the core drawing methods required to implement a new GFX 'hardware driver':
- fillScreen(uint16_t color)
- drawPixel(int16_t x, int16_t y, uint16_t color)
- drawFastVLine(int16_t x, int16_t y, int16_t h, uint16_t color)
- drawFastHLine(int16_t x, int16_t y, int16_t w, uint16_t color)
- fillRect(int16_t x, int16_t y, int16_t w, int16_t h, uint16_t color)
The way this is done is by extending the GFX base class into a Device Driver class and overriding these methods. Several screen drivers are subclassed in this way:
                           GFX
                            |
        +--------+----------+---------+--------+
        |        |          |         |        |
     ST7735   TFTLCD     PCD8544    SD1331  HX8340B
Once those methods are overridden, they are used by all the other GFX 'drawing primitives' like drawTriangle() and fillCircle(). The font rendering code in particular makes a lot of drawPixel() calls.
What's wrong with that? Let's say you want to add a new method like 'draw_bezier'. Where do you add it? If you subclass GFX and add it there, none of the 'driver' classes will inherit the behaviour. You would have to modify the original Adafruit source, or extend every driver class.
If we want to be able to extend GFX, then it can't be an ancestor class of the drivers. Also, it would be really nice to cut down the number of virtual methods to the absolute minimum. One would be ideal. Less code means less bugs.
This is where we separate the concepts of a "Raster Device", and a "Rasterizer". The device must be persistent, since it relates to physical hardware. But the chunks of code that draw pretty pictures to the device (whether lines or curves or fonts) should be able to come and go independently - even be applied to multiple devices.
It's also probably a good idea to acknowledge that some raster devices are 8 bit, and others are 16 bit. (And leave room for other depths) Sure it would be nice to abstract that, but the hit to performance is pretty intense. Abstraction is the enemy of performance. It's always faster when you know what you're doing.
While we're at it, we might as well split the 'drawing' and 'font' code so they're more granular, similar to the HTML5 Canvas concept of 'contexts'. Maybe there will be a 3D rasterizer some day...
      Raster            RasterDraw          RasterFont
        |                   |                   |
   +---------+         RasterDraw16        RasterFont16
   |         |              |                   |
Raster8   Raster16   (new primitives)      (user fonts)

RasterDraw16 and RasterFont16 wrap a Raster16 device at runtime, rather than inheriting from it.
The trade-off is that we now have to instantiate extra objects and connect them together - first the Raster device, then that device gets wrapped in the 'Draw' and 'Font' objects. One advantage that gives: if memory gets really, really tight, we can tear down the 'draw' objects (and re-instantiate them later) while leaving the device driver loaded.
Raster Drivers only need to implement one virtual method to draw pixels:
- fragment(word x, word y, byte dir, Page * pixels, word index, word count)
This method draws a 'strip' of pixels to the surface, starting at a position, and extending in one of the four cardinal directions (up, right, down, left) for a specific count of pixels. The raw pixel data is read from a virtual memory page, and assumed to be in the correct format for the device.
Most LCD driver chips implement the equivalent of this function in hardware, because they're good about rotations, and avoid having a preferred orientation. So we're mapping directly to hardware capabilities, not human abstractions.
It should be pretty clear that this one method can draw single pixels, short line segments, and therefore everything else. This interacts with my virtual memory classes - the page of data might be a WordPage which repeats the same value for the entire block. (creating flat colour) Or it may be a MemoryPage array of pixel values corresponding to part of a font character or bitmap image.
The 'direction' parameter means it is equivalent to the drawFastVLine() and drawFastHLine() methods as well as drawPixel(), so we've got three methods for the price of one.
The only ones left are fillScreen() and fillRect(), which are trivially implemented by calling fragment() once per row. And that code is logically shifted over to the RasterDraw class, to be extended with all kinds of new ways of drawing rectangles with frilly edges.
So, people writing hardware drivers only need to implement one abstract method. (ok, the colour lookup makes two...) And once that method is tested and optimal, any separately-written custom rendering code can wrap around that new driver. Driver and draw classes don't have to be descended from each other, or even know about each others' existence. (Important if you are creating local subclasses)
I'm not sure I can make it any more optimal than that.
It's not totally done, of course. RasterDraw doesn't do circles or arbitrary lines yet, simply because I haven't had a need, or the time. (It would be trivial to port Bresenham's algorithms, but I wonder if a bezier curve algorithm might not be more generalized...) But the interface 'contract' is in place, so that doesn't have to be my job anymore.
Share and Enjoy.
Afterword: On Colour Depth

Arduino code doesn't have a concept of 'Variant', and we're not going to create a generalized 'Color' class, because performance would drop through the floor if we had to create and destroy an object per pixel.
The bit depth is largely an issue of how the rest of the code stores and presents colour data. Even if a 16-bit colour screen was suddenly attached, the program can't invent more depth for 8-bit images in storage, it can only convert the pixel format. That could be done by creating a 'virtual' Raster device that has a fragment() method that converts the data before passing it on to a 'real' driver. Such 'adaptor drivers' (actually done mostly using an 'adaptor Page' for the raw data conversion) are pretty easy to make, again because only one method needs to be proxied.
In fact, there can be advantages to internally 'pretending' everything is only 8-bit (or mono) right until it gets to the driver (which might be 24 or 32 bits) - reduced storage and buffer costs mostly, even if we don't get a speed boost because of conversion costs.