that are mapped to it and automatically invalidate them when it is freed
- refcount is kept so that an external application can still create a reference
to SDL_Surface.
- lock_data was unused and is now renamed and used as a list to keep track of the blitmaps
- Regression of test_1.c of bug 3827, after fix from bug 4798.
- Blending is also needed when the palette contains alpha values, but not necessarily a colorkey.
- Clean up SDL_ConvertColorkeyToAlpha(), which doesn't seem to need the 'ignore_alpha' parameter any more.
(see bug 3827)
Konrad
This kind of blending is quite useful and, in my opinion, should be available for all renderers. I need it myself, but since I didn't want to use a custom blend mode that is supported only by certain renderers (e.g. not by the software renderer, which is quite important for me), I wrote an implementation of SDL_BLENDMODE_MUL for all renderers.
SDL_BLENDMODE_MUL implements the following equation:
dstRGB = (srcRGB * dstRGB) + (dstRGB * (1-srcA))
dstA = (srcA * dstA) + (dstA * (1-srcA))
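For renderers that do support custom blend modes, the same equation can also be expressed with SDL_ComposeCustomBlendMode(); a minimal sketch of that equivalence (not part of the patch, and it still won't help on the software renderer):

    #include "SDL.h"

    /* Sketch: the MUL equation above composed as a custom blend mode. */
    static int SetMulBlend(SDL_Texture *texture)
    {
        SDL_BlendMode mul = SDL_ComposeCustomBlendMode(
            SDL_BLENDFACTOR_DST_COLOR,           /* srcColorFactor: srcRGB * dstRGB   */
            SDL_BLENDFACTOR_ONE_MINUS_SRC_ALPHA, /* dstColorFactor: dstRGB * (1-srcA) */
            SDL_BLENDOPERATION_ADD,
            SDL_BLENDFACTOR_DST_ALPHA,           /* srcAlphaFactor: srcA * dstA       */
            SDL_BLENDFACTOR_ONE_MINUS_SRC_ALPHA, /* dstAlphaFactor: dstA * (1-srcA)   */
            SDL_BLENDOPERATION_ADD);
        return SDL_SetTextureBlendMode(texture, mul);
    }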
Background:
https://i.imgur.com/UsYhydP.png
Blended texture:
https://i.imgur.com/0juXQcV.png
Result for SDL_BLENDMODE_MOD:
https://i.imgur.com/wgNSgUl.png
Result for SDL_BLENDMODE_MUL:
https://i.imgur.com/Veokzim.png
I think I covered all the possibilities in the included patch, but I didn't write any tests for SDL_BLENDMODE_MUL, so it would be lovely if someone could do that.
Sylvain
Seems to be a regression in this commit: https://hg.libsdl.org/SDL/rev/7fdbffd47c0e
SDL_CalculatePitch() was using format->BytesPerPixel; now it uses SDL_BYTESPERPIXEL().
The underlying issue is that "surface->format->BytesPerPixel" is *not* always the same as SDL_BYTESPERPIXEL(format):
BytesPerPixel is defined as format->BytesPerPixel = (bpp + 7) / 8;
vs
#define SDL_BYTESPERPIXEL(format) ... (format & 0xff)
Because of the format definitions in SDL_pixels.h, for some formats the first gives a BytesPerPixel of 1 while the second gives 0.
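For illustration, a 1 bit-per-pixel indexed format shows the mismatch, which is what makes a pitch computed from SDL_BYTESPERPIXEL() collapse to 0 (a sketch; the helper name is made up):

    #include "SDL.h"

    /* Sketch: for SDL_PIXELFORMAT_INDEX1MSB the two values disagree. */
    static void ShowBytesPerPixelMismatch(void)
    {
        SDL_PixelFormat *fmt = SDL_AllocFormat(SDL_PIXELFORMAT_INDEX1MSB);
        SDL_Log("format->BytesPerPixel = %d", fmt->BytesPerPixel);             /* (1 + 7) / 8 == 1 */
        SDL_Log("SDL_BYTESPERPIXEL()   = %d", SDL_BYTESPERPIXEL(fmt->format)); /* 0 */
        SDL_FreeFormat(fmt);
    }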
Surfaces are allocated using SDL_SIMDAlloc()
They are marked with the SDL_SIMD_ALIGNED flag so they can be appropriately freed with SDL_SIMDFree().
(The flag is cleared when the pixels are freed in RLE, in case the user hijacks the pixels pointer.)
When providing its own memory pointer (SDL_CreateRGBSurfaceFrom()) and clearing
SDL_PREALLOC to delegate freeing that memory to SDL, it is the user's responsibility
to set SDL_SIMD_ALIGNED or not, depending on whether the pointer was allocated with
SDL_SIMDAlloc() or SDL_malloc().
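A minimal sketch of that last case, assuming the buffer really came from SDL_SIMDAlloc() (sizes, masks, and the function name are illustrative):

    #include "SDL.h"

    /* Sketch: user-allocated, SIMD-aligned pixels handed over to SDL for freeing. */
    static void HandOverSimdPixels(void)
    {
        int w = 64, h = 64, pitch = w * 4;
        void *pixels = SDL_SIMDAlloc((size_t)h * pitch);
        SDL_Surface *surf = SDL_CreateRGBSurfaceFrom(pixels, w, h, 32, pitch,
                                                     0x00FF0000, 0x0000FF00,
                                                     0x000000FF, 0xFF000000);
        surf->flags &= ~SDL_PREALLOC;    /* delegate freeing of 'pixels' to SDL   */
        surf->flags |= SDL_SIMD_ALIGNED; /* ...which must then use SDL_SIMDFree() */
        SDL_FreeSurface(surf);           /* frees both the surface and 'pixels'   */
    }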
Some blit combinations are not supported (e.g. ARGB8888 -> SDL_PIXELFORMAT_INDEX1MSB),
so prevent SDL_ConvertSurface() from creating a broken surface that cannot be blitted.
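As a consequence, callers should expect NULL for such targets; a small sketch of the caller side (the helper name is made up):

    #include "SDL.h"

    /* Sketch: converting to an unsupported target now fails cleanly with NULL
       instead of returning a surface that cannot be blitted. */
    static SDL_Surface *TryConvertToIndex1(SDL_Surface *src)
    {
        SDL_Surface *out = SDL_ConvertSurfaceFormat(src, SDL_PIXELFORMAT_INDEX1MSB, 0);
        if (out == NULL) {
            SDL_Log("conversion not supported: %s", SDL_GetError());
        }
        return out;
    }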
Sylvain
Patch a few warnings when using:
-Wmissing-prototypes -Wdocumentation -Wdocumentation-unknown-command
They are automatically enabled with -Wall
Anthony @ POW Games
SDL_CreateTextureFromSurface() makes an internal call to SDL_GetColorKey(), which can return an error and spams the error log with "Surface doesn't have a colorkey", even though the original function didn't return an error.
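One way to avoid the spurious error is to ask SDL_HasColorKey() first, which does not set an error; a sketch of that pattern (not necessarily the committed fix):

    #include "SDL.h"

    /* Sketch: fetch a colorkey only if the surface actually has one. */
    static SDL_bool GetOptionalColorKey(SDL_Surface *surface, Uint32 *key)
    {
        if (SDL_HasColorKey(surface)) {
            return (SDL_GetColorKey(surface, key) == 0) ? SDL_TRUE : SDL_FALSE;
        }
        return SDL_FALSE;  /* no colorkey, and no error message logged */
    }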
New functions get and set the YUV colorspace conversion mode:
SDL_SetYUVConversionMode()
SDL_GetYUVConversionMode()
SDL_GetYUVConversionModeForResolution()
SDL_ConvertPixels() converts between all supported RGB and YUV formats, with SSE acceleration for converting from planar YUV formats (YV12, NV12, etc) to common RGB/RGBA formats.
Added a new test program, testyuv, to verify correctness and speed of YUV conversion functionality.
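Typical usage of the new API looks roughly like this (a sketch; buffer sizes, formats, and the chosen BT.601 mode are illustrative):

    #include "SDL.h"

    /* Sketch: pick BT.601 coefficients, then convert an ARGB8888 image to YV12. */
    static void ConvertToYV12Example(void)
    {
        int w = 320, h = 240;
        Uint32 *argb = (Uint32 *)SDL_malloc((size_t)w * h * sizeof(Uint32));
        Uint8 *yv12  = (Uint8 *)SDL_malloc((size_t)w * h +
                                           2 * (size_t)((w + 1) / 2) * ((h + 1) / 2));

        SDL_SetYUVConversionMode(SDL_YUV_CONVERSION_BT601);
        /* ... fill 'argb' with image data ... */
        if (SDL_ConvertPixels(w, h,
                              SDL_PIXELFORMAT_ARGB8888, argb, w * 4,
                              SDL_PIXELFORMAT_YV12, yv12, w) != 0) {
            SDL_Log("conversion failed: %s", SDL_GetError());
        }
        SDL_free(argb);
        SDL_free(yv12);
    }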
Sylvain
There are various YUV-RGB conversion coefficients, according to https://www.fourcc.org/fccyvrgb.php
I chose the first one (from Video Demystified, with integer multiplication),
but the current SDL2 dither functions in fact use the next one, which follows a specification called CCIR 601.
Here's a patch to use the second set, along with the previous warning corrections.
There are fewer multiplications involved because the chroma coefficient is 1.
Also, doing float multiplication is just as efficient with vectorization.
In the end, YUV decoding is faster: ~165 ms vs my previous 195 ms.
Moreover, if SDL2 is compiled with -march=native, the YUV decoding time drops to ~130 ms, while the older code remains around ~220 ms.
For information, from jpeg-9 source code:
jpeg-9/jccolor.c
* YCbCr is defined per CCIR 601-1, except that Cb and Cr are
* normalized to the range 0..MAXJSAMPLE rather than -0.5 .. 0.5.
* The conversion equations to be implemented are therefore
* Y = 0.29900 * R + 0.58700 * G + 0.11400 * B
* Cb = -0.16874 * R - 0.33126 * G + 0.50000 * B + CENTERJSAMPLE
* Cr = 0.50000 * R - 0.41869 * G - 0.08131 * B + CENTERJSAMPLE
jpeg-9/jdcolor.c
* YCbCr is defined per CCIR 601-1, except that Cb and Cr are
* normalized to the range 0..MAXJSAMPLE rather than -0.5 .. 0.5.
* The conversion equations to be implemented are therefore
*
* R = Y + 1.40200 * Cr
* G = Y - 0.34414 * Cb - 0.71414 * Cr
* B = Y + 1.77200 * Cb
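As a worked example, the decode-side equations translate to roughly the following per-pixel code (a plain-C sketch with clamping, not the vectorized path in the patch; helper names are made up):

    #include "SDL.h"

    /* Sketch: CCIR 601 YCbCr -> RGB for one pixel, per the jdcolor.c equations
       (CENTERJSAMPLE is 128 for 8-bit samples). */
    static Uint8 Clamp8(float v)
    {
        return (Uint8)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
    }

    static void YCbCrToRGB(Uint8 y, Uint8 cb, Uint8 cr, Uint8 *r, Uint8 *g, Uint8 *b)
    {
        float Y  = (float)y;
        float Cb = (float)cb - 128.0f;
        float Cr = (float)cr - 128.0f;

        *r = Clamp8(Y + 1.40200f * Cr);
        *g = Clamp8(Y - 0.34414f * Cb - 0.71414f * Cr);
        *b = Clamp8(Y + 1.77200f * Cb);
    }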
Sylvain
A few issues with YUV in SDL2 when using odd dimensions, plus missing conversions from/to YUV formats.
1) The big part is that SDL_ConvertPixels() does not convert to/from YUV in most cases. It now works with any format and also with odd dimensions,
by adding two internal functions, SDL_ConvertPixels_YUV_to_ARGB8888 and SDL_ConvertPixels_ARGB8888_to_YUV (could it be XRGB8888?).
The target format is hard-coded to ARGB8888 (which is the default in the internals of the software renderer).
In case of a different YUV conversion, it does an intermediate conversion through an ARGB8888 buffer.
SDL_ConvertPixels_YUV_to_ARGB8888 is somewhat redundant with all the "Color*Dither*Mod*" functions,
but it allows SDL_ConvertPixels() to handle all YUV formats completely.
It also works with odd dimensions.
Moreover, I did some benchmarks (SDL_ConvertPixels() vs Color32DitherYV12Mod1X and Color32DitherYUY2Mod1X)
with gcc-6.3 and clang-4.0. gcc performs better than clang, and with gcc, SDL_ConvertPixels() performs better (20%) than the two C functions Color32Dither*().
For instance, to convert a 3888x2592 image 10 times, it takes ~195 ms with SDL_ConvertPixels() and ~235 ms with Color32Dither*(),
especially because of gcc's vectorization feature that optimizes all the conversion loops (-ftree-loop-vectorize).
N.B.: I did not add an image pitch for the YUV buffers, because it complicates the code and the API a little:
there would be some ambiguity when setting the pitch exactly to the image width:
would it be a pitch of the image width (for luma and chroma), or just contiguous data? (pitch=0 could be used for the latter).
2) Small issues with odd dimensions:
If the width "w" is odd, the luma plane width is still "w" whereas the chroma planes are "(w + 1)/2" wide. Almost the same for an odd "h".
The solution is to strategically substitute "w" with "(w + 1)/2" in the right places (see the plane-size sketch after this list).
- In the repository, SDL_ConvertPixels() handles YUV only if the YUV source format is exactly the same as the YUV destination format.
It basically does a memcpy of the pixels, but it was done incorrectly when the width or height is odd (wrong size of the chroma planes). This is fixed.
- SDL renderers don't support odd width/height for YUV textures.
This is fixed for software, opengl, and opengles2 (opengles 1 does not support it and falls back to software rendering).
This is *not* fixed for D3D and D3D11 ... (and others, psp?)
Only *two* dither functions are fixed ... not sure if the others are really used.
- It was not possible to create an NV12/NV21 texture with the software renderer, whereas other renderers allow it.
This is fixed by using SDL_ConvertPixels() underneath.
- It was not possible to SDL_UpdateTexture() a texture of format NV12/NV21 with the software renderer. This is fixed.
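For reference, the plane sizing that the odd-dimension fixes rely on, sketched for a tightly packed YV12 buffer (the helper name is made up):

    #include "SDL.h"

    /* Sketch: total byte size of a YV12 image with possibly odd width/height:
       a full-size luma plane plus two chroma planes whose dimensions are rounded up. */
    static size_t YV12Size(int w, int h)
    {
        size_t uv_w = ((size_t)w + 1) / 2;
        size_t uv_h = ((size_t)h + 1) / 2;
        return (size_t)w * h + 2 * uv_w * uv_h;
    }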
Here are also two test cases:
- one that does all combinations of conversions.
- one to test partial UpdateTexture().