...As opposed to once per draw call.
This change was made because multiple draw calls can be made per
frame, or only a handful of draw calls may be made per minute.
Since draw calls are an inconsistent metric, I just switched to
frames instead.
This would occur in CSE2E's options menu.
It was caused by cute_spritebatch destroying a texture atlas that
was being used by the current unflushed vertex buffer. To solve
this, we now track which textures the current buffer is using, and
flush the buffer whenever one of those textures is about to be
modified or deleted.
As you can guess, this issue doesn't affect the SDLTexture backend,
since its batching system is half-decent.
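Roughly, the fix looks like this (a sketch only, not the actual CSE2
code; the names are hypothetical, and cute_spritebatch's real callback
signature differs):

    #include <stdbool.h>
    #include <stddef.h>

    #include <GL/gl.h>

    #define MAX_BUFFER_TEXTURES 16

    /* Textures referenced by the current unflushed vertex buffer */
    static GLuint textures_used[MAX_BUFFER_TEXTURES];
    static size_t total_textures_used;

    /* Issues the pending draw call and clears the texture list */
    void FlushVertexBuffer(void);

    static bool BufferUsesTexture(GLuint texture_id)
    {
        size_t i;

        for (i = 0; i < total_textures_used; ++i)
            if (textures_used[i] == texture_id)
                return true;

        return false;
    }

    /* Hooked up as cute_spritebatch's texture-destruction callback */
    static void DestroyTextureCallback(GLuint texture_id)
    {
        /* Don't delete the atlas while unflushed vertices still
           reference it */
        if (BufferUsesTexture(texture_id))
            FlushVertexBuffer();

        glDeleteTextures(1, &texture_id);
    }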
I'm not sure why there was linear filtering when I was rendering at
1:1 pixel ratio, but it did happen. This fixes it by forcing
nearest-neighbour. The artefacting was caused by the linear filtering
blending in pixels from outside the specified texture coordinates,
creating lines around everything.
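For reference, the forcing is just a couple of standard OpenGL texture
parameters, set when each texture is created (texture_id here being
whichever texture is involved):

    /* Force nearest-neighbour sampling, so a 1:1 blit can never blend
       in texels from outside the quad's texture coordinates */
    glBindTexture(GL_TEXTURE_2D, texture_id);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);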
Fun fact: the framebuffer technique CSE2 uses is demanding on the Pi
(1278x720 runs at 60 FPS when the framebuffer is forced to 852x480,
even though all the internal rendering is still done at 1278x720). I
guess rendering those extra 920160 pixels per frame really takes its
toll.
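For the unfamiliar, the technique just means rendering the game into
an off-screen texture, then drawing that texture to the window as a
single quad. The setup looks something like this (the names are made
up, but the GL calls are standard):

    static GLuint framebuffer_id;
    static GLuint framebuffer_texture_id;

    /* Create the off-screen target the game renders into; width and
       height are what get clamped to 852x480 on the Pi */
    static void CreateFramebuffer(GLsizei width, GLsizei height)
    {
        glGenTextures(1, &framebuffer_texture_id);
        glBindTexture(GL_TEXTURE_2D, framebuffer_texture_id);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glGenFramebuffers(1, &framebuffer_id);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer_id);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, framebuffer_texture_id, 0);
    }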
Apparently 2 VBOs weren't enough. This bumped the framerate from
13 FPS to 20 FPS in a stress test (CSE2E at 1704x960 on a Raspberry Pi 3B
in X11 with the KMS OpenGL driver).
This should reduce the stalling that occurs when we're about to
upload to a buffer the OpenGL driver is still processing.
Hopefully, this is what was making the OpenGL ES 2.0 renderer so much
slower than the SDLTexture renderer on the Raspberry Pi 3B (SDL uses
*8* buffers). Unfortunately, I don't have access to it right now, so
I can't test this.
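For what it's worth, the round-robin submission amounts to this
sketch (the buffer count and names here are illustrative):

    #define TOTAL_VERTEX_BUFFERS 4

    static GLuint vertex_buffer_ids[TOTAL_VERTEX_BUFFERS];
    static size_t current_vertex_buffer;

    /* Cycle to the next VBO, so the driver can keep drawing from the
       previous one while we upload into this one */
    static void UploadVertexData(const GLvoid *vertices, GLsizeiptr size)
    {
        current_vertex_buffer = (current_vertex_buffer + 1)
                                % TOTAL_VERTEX_BUFFERS;
        glBindBuffer(GL_ARRAY_BUFFER,
                     vertex_buffer_ids[current_vertex_buffer]);

        /* A fresh glBufferData with GL_STREAM_DRAW also lets the
           driver orphan the old storage instead of blocking on it */
        glBufferData(GL_ARRAY_BUFFER, size, vertices, GL_STREAM_DRAW);
    }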
Now the SDLSurface backend survives window resizes (also triggered by
alt-tabbing while in fullscreen), and the SDLTexture backend properly
regenerates its textures after a fullscreen alt-tab in DirectX mode.
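In SDL2 terms, both fixes boil down to handling the right events in
the main loop (the Regenerate* functions are hypothetical stand-ins
for each backend's recreation code):

    #include "SDL.h"

    void RegenerateWindowSurface(void); /* hypothetical */
    void RegenerateTextures(void);      /* hypothetical */

    static void HandleEvent(const SDL_Event *event)
    {
        switch (event->type)
        {
            case SDL_WINDOWEVENT:
                /* Fired on manual resizes, and on the resize caused
                   by alt-tabbing out of fullscreen */
                if (event->window.event == SDL_WINDOWEVENT_SIZE_CHANGED)
                    RegenerateWindowSurface();

                break;

            case SDL_RENDER_TARGETS_RESET:
                /* e.g. DirectX device loss on a fullscreen alt-tab */
                RegenerateTextures();
                break;
        }
    }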