One place known to differ in a significant way is a single line segment that
starts and ends on the same point; the GL renderers will light up a single
pixel here, whereas the software renderer will not. My current belief is that this
is a bug in the software renderer, based on the wording of the docs:
"SDL_RenderDrawLine() draws the line to include both end points."
You can see an example program that triggers that difference in Bug #2006.
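A minimal way to trigger it (a hypothetical repro; the actual test program is
attached to the bug) is a zero-length line, which per the docs should still
light up exactly one pixel:

    SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
    SDL_RenderDrawLine(renderer, 100, 100, 100, 100);  /* start == end */
    SDL_RenderPresent(renderer);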
As it stands, the GL renderers might _also_ render diagonal lines differently,
as the Bresenham step might vary between implementations (one does three
pixels and then two, the other does two and then three, etc.). But this patch
causes those lines to start and end on the correct pixel, and that's the best
we can do, and all anyone really needs here.
Not closing any bugs with this patch (yet!), but here are several that it
appears to fix. If no other corner cases pop up, we'll call this done.
Reference Bug #2006.
Reference Bug #1626.
Reference Bug #4001.
...and probably others...
Currently, if an application wants to toggle VSync, it has to tear
down the renderer and recreate it. This patch fixes that by letting
applications call SDL_RenderSetVSync().
This is the same as the patch in #3673, except it applies to all
renderers (including PSP, even though it seems that the VSync flag is
disabled for that renderer). Furthermore, the renderer flags are updated
as well, which #3673 didn't do. It is also a public API call instead of
using hint callbacks (which could potentially be dangerous).
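Usage would look something like this (a sketch; error handling kept minimal):

    /* Toggle vsync at runtime without recreating the renderer. */
    if (SDL_RenderSetVSync(renderer, 1) != 0) {  /* 1 = enable, 0 = disable */
        SDL_Log("VSync change not supported: %s", SDL_GetError());
    }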
Closes #3673.
This is needed to support CHERI, and thus Arm's experimental Morello
prototype, where pointers are implemented using unforgeable capabilities
that include bounds and permissions metadata to provide fine-grained
spatial and referential memory safety, as well as revocation by sweeping
memory to provide heap temporal memory safety.
On most systems (anything with a flat memory hierarchy rather than using
segment-based addressing), size_t and uintptr_t are the same type.
However, on CHERI, size_t is just an integer offset, whereas uintptr_t
is still a capability as described above. Casting a pointer to size_t
will strip the metadata and validity tag, and casting from size_t to a
pointer will result in a null-derived capability whose validity tag is
not set, and thus cannot be dereferenced without faulting.
The audio and cursor casts were harmless as they intend to stuff an
integer into a pointer, but using uintptr_t is the idiomatic way to do
that and silences our compiler warnings (which our build tool makes
fatal by default as they often indicate real problems). The iconv and
egl casts were true positives as SDL_iconv_t and iconv_t are pointer
types, as is NativeDisplayType on most OSes, so this would have trapped
at run time when using the round-tripped pointers. The gles2 casts were
also harmless; the OpenGL API defines this argument to be a pointer type
(and uses the argument name "pointer"), but it in fact represents an
integer offset, so like audio and cursor the additional idiomatic cast
is needed to silence the warning.
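For illustration, the two patterns look roughly like this (a generic sketch,
not code lifted from the patch):

    #include <stdint.h>

    void example(void *some_ptr, int device_index)
    {
        /* Integer stuffed into a pointer slot (the audio/cursor pattern):
           uintptr_t is the idiomatic intermediate type. */
        void *userdata = (void *)(uintptr_t)device_index;

        /* Pointer round trip (the iconv/egl pattern): going through size_t
           would strip the capability's validity tag on CHERI and trap when
           the result is dereferenced; uintptr_t keeps the capability intact. */
        uintptr_t bits = (uintptr_t)some_ptr;
        void *same_ptr = (void *)bits;

        (void)userdata; (void)same_ptr;
    }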
OpenGL leaves the final line segment open, but SDL's software renderer does not,
so we need a tiny bit of trigonometry here to move one more pixel in the right
direction.
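Roughly, the idea is something like this (a sketch of the approach, not the
literal patch):

    #include <math.h>

    /* Extend the final end point by one pixel along the line's direction
       so the last pixel gets rasterized too. */
    static void extend_endpoint(float x1, float y1, float *x2, float *y2)
    {
        const float angle = atan2f(*y2 - y1, *x2 - x1);
        *x2 += cosf(angle);
        *y2 += sinf(angle);
    }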
This is the OpenGL line drawing fix for Bugzilla #3182, but there's some
disagreement about what the renderers should do here, so I'm backing this out
until after 2.0.12 ships, and then we'll reevaluate all the renderer backends
to decide what's correct, and make them all work the same.
Likewise for the GLES1 and GLES2 renderers.
This solves the missing pixel at the end of a line and removes all the
heuristics for various platforms/drivers. It's possible we could still use
GL_LINE_STRIP with this and save some vertex buffer space, assuming this
doesn't upset some driver somewhere, but this seems to be a clean fix that
makes the GL renderers match the software renderer output.
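A rough sketch of the idea (assuming fixed-function GL and an SDL_FPoint
array; not the literal patch):

    #include "SDL_rect.h"
    #include "SDL_opengl.h"

    /* Under the diamond-exit rule GL may leave the last pixel of a segment
       unlit, so draw the segments and then plot the final end point. */
    static void draw_lines_closed(const SDL_FPoint *points, int count)
    {
        int i;
        glBegin(GL_LINES);
        for (i = 0; i < count - 1; ++i) {
            glVertex2f(points[i].x, points[i].y);
            glVertex2f(points[i + 1].x, points[i + 1].y);
        }
        glEnd();
        glBegin(GL_POINTS);
        glVertex2f(points[count - 1].x, points[count - 1].y);
        glEnd();
    }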
Diamond-exit rule explanation:
http://graphics-software-engineer.blogspot.com/2012/04/rasterization-rules.html
Fixes Bugzilla #3182.
Konrad
This was something rather trivial to add, but it has been asked for several times before (I did google it as well).
It should be possible to dynamically change the scaling mode of a texture. It is actually a trivial task, but until now it was only possible with a hint set before creating the texture.
I needed it for my game as well, so I took the liberty of writing it myself.
This patch adds the following functions:
SDL_SetTextureScaleMode(SDL_Texture * texture, SDL_ScaleMode scaleMode);
SDL_GetTextureScaleMode(SDL_Texture * texture, SDL_ScaleMode *scaleMode);
That way you can change texture scaling on the fly.
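Usage then looks like this (a sketch using the new API):

    SDL_ScaleMode mode;
    SDL_SetTextureScaleMode(texture, SDL_ScaleModeNearest);  /* switch filtering */
    SDL_GetTextureScaleMode(texture, &mode);                  /* mode is now SDL_ScaleModeNearest */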
warning: either cast from 'int' to 'size_t' (aka 'unsigned long') is ineffective, or there is loss of precision before the conversion [bugprone-misplaced-widening-cast]
https://bugzilla.libsdl.org/show_bug.cgi?id=4350
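An illustrative (hypothetical) example of the pattern this check flags:

    #include <stddef.h>

    /* The multiplication still happens in int, so casting the finished
       result to size_t widens it too late to prevent overflow. */
    size_t bad_size(int w, int h)  { return (size_t)(w * h); }  /* flagged       */
    size_t good_size(int w, int h) { return (size_t)w * h; }    /* widened first */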
We can't safely call GL_DestroyRenderer() until GL_LoadFunctions()
succeeds, because we would be missing functions that we try to use
when activating the renderer for destruction if we have a GL context.