When converting audio between signed and unsigned values, or vice versa,
the silence value chosen by SDL was that of the device, not of the stream
that the data was being put into. After conversion this would lead to a
very high or low value, making the speaker jump to an extreme position,
causing an audible noise whenever creating, destroying, or playing silence
on a device that required such conversion.
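For illustration, a minimal standalone sketch (not SDL internals) of why the wrong silence value is audible: the silence byte for unsigned 8-bit audio is 0x80, but for signed formats it is 0, so filling a signed 16-bit stream with the unsigned device's silence byte pins every sample near an extreme.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        const uint8_t u8_silence = 0x80;   /* correct silence for AUDIO_U8  */
        const int16_t s16_silence = 0;     /* correct silence for AUDIO_S16 */

        /* Filling an AUDIO_S16 buffer with the AUDIO_U8 silence byte: */
        int16_t wrong;
        memset(&wrong, u8_silence, sizeof(wrong));

        printf("correct S16 silence: %d, wrong fill: %d\n", s16_silence, wrong);
        return 0;
    }

The "wrong fill" prints as -32640 instead of 0, i.e. the speaker cone held near one extreme.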
Cameron Gutman
I was trying to use SDL_GetQueuedAudioSize() to ensure my audio latency didn't get too high while streaming data in from the network. If I get more than N frames of audio queued, I know that the network is giving me more data than I can play and I need to drop some to keep latency low.
This doesn't work well on WASAPI out of the box, due to the addition of GetPendingBytes() to the amount of queued data. As a terrible hack, I loop 100 times calling SDL_Delay(10) and SDL_GetQueuedAudioSize() before I ever call SDL_QueueAudio() to get a "baseline" amount that I then subtract from SDL_GetQueuedAudioSize() later. However, because this value isn't actually a constant, this hack can cause SDL_GetQueuedAudioSize() - baselineSize to be < 0. This means I have no accurate way of determining how much data is actually queued in SDL's audio buffer queue.
The SDL_GetQueuedAudioSize() documentation says: "This is the number of bytes that have been queued for playback with SDL_QueueAudio(), but have not yet been sent to the hardware." Yet, SDL_GetQueuedAudioSize() returns > 0 value when SDL_QueueAudio() has never been called.
Based on that documentation, I believe the current behavior contradicts the documented behavior of this function and should be changed in line with Boris's patch.
I understand that exposing the IAudioClient::GetCurrentPadding() value is useful, but a solution there needs to take into account how much of that data is silence inserted by SDL and how much is actual data queued by the user with SDL_QueueAudio(). I think the best approach is to remove the GetPendingBytes() call until SDL is able to keep track of queued data and make sense of it. This would make SDL_GetQueuedAudioSize() possible to use accurately with WASAPI.
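A hedged sketch of the latency check Cameron describes, assuming SDL_GetQueuedAudioSize() only reports user-queued data (i.e. after a change like Boris's patch); dev, spec, and MAX_QUEUED_FRAMES are illustrative names, not SDL API:

    #include "SDL.h"

    #define MAX_QUEUED_FRAMES 4096  /* arbitrary latency budget, in sample frames */

    static void queue_network_audio(SDL_AudioDeviceID dev, const SDL_AudioSpec *spec,
                                    const void *data, Uint32 len)
    {
        const Uint32 frame_size =
            (Uint32) (SDL_AUDIO_BITSIZE(spec->format) / 8) * spec->channels;
        const Uint32 queued_frames = SDL_GetQueuedAudioSize(dev) / frame_size;

        if (queued_frames > MAX_QUEUED_FRAMES) {
            /* The network is outpacing playback; drop this packet to cap latency.
             * This only works if the returned size excludes driver-side padding. */
            return;
        }
        SDL_QueueAudio(dev, data, len);
    }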
This was meant to migrate CoreAudio onto the same SDL_RunAudio() path that
most other audio drivers are on, but it introduced a bug because it doesn't
deal with dropped audio buffers...and fixing that properly just introduces
latency.
I might revisit this later, perhaps by reworking SDL_RunAudio to allow for
this sort of API better, or redesigning the whole subsystem or something, I
don't know. I'm not super-thrilled that this has to exist outside of the usual
codepaths, though.
Fixes Bugzilla #4481.
SDLActivity thread priority is unchanged, by default -10 (THREAD_PRIORITY_VIDEO).
SDLAudio thread priority was -4 (SDL_SetThreadPriority was ignored) and is now -16 (THREAD_PRIORITY_AUDIO).
SDLThread thread priority was 0 (THREAD_PRIORITY_DEFAULT) and is now -4 (THREAD_PRIORITY_DISPLAY).
This means that if you have two devices named "Soundblaster Pro" in your
machine, one will be reported as "Soundblaster Pro" and the other as
"Soundblaster Pro (2)".
This makes it so you can't get into a position where one of your devices
can't be opened because another is sitting on the same name.
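A minimal sketch of where the disambiguated name shows up, using the public enumeration API (SDL_Init(SDL_INIT_AUDIO) is assumed to have been called already):

    #include "SDL.h"
    #include <stdio.h>

    static void list_playback_devices(void)
    {
        const int count = SDL_GetNumAudioDevices(0);  /* 0 = playback devices */
        for (int i = 0; i < count; i++) {
            /* Names are now unique, so any of them can be passed to
             * SDL_OpenAudioDevice() without colliding with a twin device. */
            printf("device %d: %s\n", i, SDL_GetAudioDeviceName(i, 0));
        }
    }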
Author: Anthony Pesch <inolen@gmail.com>
Date: Fri May 4 20:21:21 2018 -0400
Added SDL_AUDIO_ALLOW_SAMPLES_CHANGE flag, enabling users of SDL_OpenAudioDevice
to get the sample size of the actual hardware buffer instead of having a stream
created to handle the delta.
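A sketch of opening a device with the new flag; the callback and the 48000/float/stereo request are illustrative choices:

    #include "SDL.h"

    static SDL_AudioDeviceID open_with_native_buffer(SDL_AudioCallback cb, void *userdata)
    {
        SDL_AudioSpec desired, obtained;
        SDL_zero(desired);
        desired.freq = 48000;
        desired.format = AUDIO_F32;
        desired.channels = 2;
        desired.samples = 1024;            /* a preference, not a requirement */
        desired.callback = cb;
        desired.userdata = userdata;

        SDL_AudioDeviceID dev = SDL_OpenAudioDevice(NULL, 0, &desired, &obtained,
                                                    SDL_AUDIO_ALLOW_SAMPLES_CHANGE);
        if (dev != 0) {
            /* With the flag set, obtained.samples reflects the actual hardware
             * buffer size rather than being hidden behind a conversion stream. */
            SDL_Log("hardware buffer: %u sample frames", (unsigned) obtained.samples);
        }
        return dev;
    }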
This would cause problems in various ways, but specifically triggers an
assert when you close a WASAPI capture device in an app running over RDP.
Related to (but not the actual bug in) Bugzilla #3924.
SDL now builds with gcc 7.2 with the following command line options:
-Wall -pedantic-errors -Wno-deprecated-declarations -Wno-overlength-strings --std=c99
XAudio2 doesn't have capture support, so WASAPI was meant to replace it; the
holdout was WinRT, which still needed it as its primary audio target until the
WASAPI code could be made to work.
The support matrix now looks like:
WinXP: directsound by default, winmm as a fallback for buggy drivers.
Vista+: WASAPI (directsound and winmm as fallbacks for debugging).
WinRT: WASAPI
This time it's using real math from a real whitepaper instead of my previous
amateur, fast-but-low-quality attempt. The new resampler does "bandlimited
interpolation," as described here: https://ccrma.stanford.edu/~jos/resample/
The output appears to sound cleaner, especially at high frequencies, and of
course works with non-power-of-two rate conversions.
There are some obvious optimizations to be done to this still, and there is
other fallout: this doesn't resample a buffer in-place, and the 2-channels-Sint16
fast path is gone because this resampler does a _lot_ of floating point math.
There is a nasty hack to make it work with SDL_AudioCVT.
It's possible these issues are solvable, but they aren't solved as of yet.
Still, I hope this effort is slouching in the right direction.
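For context, a minimal sketch of the windowed-sinc ("bandlimited interpolation") idea; this is not SDL's implementation, handles only mono float data, and ignores the extra filtering needed when downsampling:

    #include <math.h>
    #include <stddef.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define ZERO_CROSSINGS 8   /* sinc taps on each side of the sample point */

    static double sinc(double x)
    {
        return (x == 0.0) ? 1.0 : sin(M_PI * x) / (M_PI * x);
    }

    /* Resample mono float samples from inrate to outrate (upsampling case).
     * Downsampling also requires scaling the sinc cutoff by outrate/inrate. */
    static size_t resample_mono(const float *in, size_t inlen, int inrate,
                                float *out, size_t outmax, int outrate)
    {
        const double step = (double) inrate / (double) outrate;
        size_t written = 0;

        for (size_t i = 0; i < outmax; i++) {
            const double srcpos = i * step;          /* fractional input position */
            const long center = (long) srcpos;
            if ((size_t) center >= inlen) {
                break;                               /* ran out of input */
            }

            double acc = 0.0;
            for (long n = center - ZERO_CROSSINGS + 1; n <= center + ZERO_CROSSINGS; n++) {
                if (n < 0 || (size_t) n >= inlen) {
                    continue;                        /* treat out-of-range input as silence */
                }
                const double x = srcpos - (double) n;  /* distance from this tap */
                const double window = 0.5 * (1.0 + cos(M_PI * x / ZERO_CROSSINGS));
                acc += in[n] * sinc(x) * window;     /* windowed-sinc weight */
            }
            out[written++] = (float) acc;
        }
        return written;
    }

Because step is simply the ratio of the two rates, nothing here requires a power-of-two conversion.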
Simon Hug
This issue actually raises the question of whether this API change (requiring an initialized audio subsystem) breaks backwards compatibility. I don't see the documentation saying it is needed in 2.0.5.
David Ludwig
I've created a new set of patches. I am happy to create more, if it would help.
One version only copies 'size'.
A second version copies both 'size' and 'silence'. Looking over the documentation for SDL_OpenAudio in SDL_audio.h, I noticed it mentions that both 'size' and 'silence' are things that SDL_OpenAudio would calculate.
Regarding *both* patches, I did notice that SDL 1.2 appears to have always modified desired's size and silence fields. The SDL wiki, at https://wiki.libsdl.org/SDL_OpenAudio#Remarks , does note:
Simon Hug
Some code in SDL loads libraries with SDL_LoadObject to get more information or to use newer APIs. SDL_LoadObject may fail and set an error message, and SDL will continue with some fallback code. Since SDL will overwrite the error or exit the function with a return value that indicates success, the error from SDL_LoadObject for the optional stuff might as well be cleared right away.
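A sketch of that pattern using SDL's public loading functions; "libfoo.so" and foo_newer_api are hypothetical placeholders for the optional dependency:

    #include "SDL.h"

    typedef int (*FooNewerAPI)(void);

    static FooNewerAPI LoadOptionalFoo(void **libhandle)
    {
        *libhandle = SDL_LoadObject("libfoo.so");   /* optional dependency */
        if (*libhandle == NULL) {
            /* The fallback path works fine without it, so don't leave
             * SDL_LoadObject's error message around for the app to see. */
            SDL_ClearError();
            return NULL;
        }
        return (FooNewerAPI) SDL_LoadFunction(*libhandle, "foo_newer_api");
    }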
This gracefully recovers when a device format is changed, and will switch
to the new default device if the current one is unplugged, etc.
This does not handle when a new default device is added; it only notices
if the current default goes away. That will be fixed by implementing the
stubbed-out MMNotificationClient_OnDefaultDeviceChanged() function.
We will throw away the data anyhow, but some apps depend on the callback
firing to make progress; testmultiaudio.c, if nothing else, is an example
of this.
Capture also will now fire the callback in these conditions, offering nothing
but silence.
Apps can check SDL_GetAudioDeviceStatus() or listen for the
SDL_AUDIODEVICEREMOVED event if they want to gracefully deal with
an opened audio device that has been unexpectedly lost.
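A sketch of the event-based handling mentioned above; the reopening logic is left out:

    #include "SDL.h"

    static void handle_audio_event(const SDL_Event *e, SDL_AudioDeviceID dev)
    {
        if (e->type == SDL_AUDIODEVICEREMOVED &&
            (SDL_AudioDeviceID) e->adevice.which == dev) {
            /* Our opened device is gone; the callback keeps firing with silence,
             * so close it and, for example, reopen the new default device.
             * Apps that prefer polling can instead watch for
             * SDL_GetAudioDeviceStatus(dev) returning SDL_AUDIO_STOPPED. */
            SDL_CloseAudioDevice(dev);
        }
    }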
This should remain binary compatible with Windows XP, as we dynamically
load anything we need and fall back to DirectSound/WinMM/XAudio2 if not
available.
Walter van Niftrik
We have found that since SDL 2.0.5 the audio callback thread is created with a very small stack size. In our application this is leading to stack overflows.
We believe there is a bug at http://hg.libsdl.org/SDL/file/391fd532f79e/src/audio/SDL_audio.c#l1132, where the is_internal_thread flag appears to be inverted.
This defaults to the internal SDL resampler, since that's the likely default
without a system-wide install of libsamplerate, but those that need more can
tweak this.
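A sketch of the tweak, assuming the hint involved is SDL_HINT_AUDIO_RESAMPLING_MODE (set it before the audio subsystem initializes):

    #include "SDL.h"

    int main(int argc, char *argv[])
    {
        (void) argc; (void) argv;

        /* Request a libsamplerate-backed mode; if libsamplerate isn't present
         * at runtime, SDL quietly falls back to its internal resampler. */
        SDL_SetHint(SDL_HINT_AUDIO_RESAMPLING_MODE, "medium");

        if (SDL_Init(SDL_INIT_AUDIO) != 0) {
            return 1;
        }
        /* ... open devices / build audio streams as usual ... */
        SDL_Quit();
        return 0;
    }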
There was a draft of this where it did audio conversion into the final buffer,
if there was enough room available past what you asked for, but that interface
got removed, so the parameters didn't make sense (and we were using the
wrong one in any case, too!).
This tends to be a frequent spot where drivers hang, and the waits were
often unreliable in any case.
Instead, our audio thread now alerts the driver that we're done streaming audio
(which currently XAudio2 uses to alert the system not to warn about the
impending underflow) and then SDL_Delay()'s for a duration that's reasonable
to drain the DMA buffers before closing the device.
This tries to make SDL robust against device drivers that have hung up, so
apps don't freeze in catastrophic (but not necessarily uncommon) conditions.
Now we detach the audio thread and let it clean up and don't care if it
never actually runs to completion.
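A minimal sketch of the detach-instead-of-wait idea using SDL's public thread API; the real change lives in SDL's internal audio shutdown path:

    #include "SDL.h"

    static int AudioThreadFn(void *userdata)
    {
        (void) userdata;
        /* ... feed the device until told to shut down ... */
        return 0;
    }

    static void shutdown_audio_thread(SDL_Thread *thread)
    {
        /* SDL_WaitThread() would block forever if the driver has hung;
         * detaching lets the thread clean itself up whenever (if ever)
         * it finishes, and nobody has to wait on it. */
        SDL_DetachThread(thread);
    }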
James Zipperer
The problem I was seeing was that the ALSA hotplug thread would call SDL_RemoveAudioDevice, but my application code was not seeing an SDL_AUDIODEVICEREMOVED event to go along with it. To fix it, I added some code into SDL_RemoveAudioDevice to call SDL_OpenedAudioDeviceDisconnected on the corresponding open audio device. There didn't appear to be a way to cross-reference the handle that SDL_RemoveAudioDevice gets and the SDL_AudioDevice pointer that SDL_OpenedAudioDeviceDisconnected needs, so I ended up adding a void *handle field to struct SDL_AudioDevice so that I could do the cross-reference.
Is there some other way beside adding a void *handle field to the struct to get the proper information for SDL_OpenedAudioDeviceDisconnected?
James Zipperer
Close the audio device before waiting for the audio thread to complete, which fixes a situation where the audio thread never completes
Add an additional check in the audio thread to see if the device is enabled and bail out if the device is no longer enabled
Otherwise, if you had a massive, one-time queue buildup, the memory from that
remains allocated until you close the device. Also, if you are just using a
reasonable amount of space, this would previously cause you to reallocate it
over and over instead of keeping a little bit of memory around.
I think this was important for SDL 1.2 because some targets needed
special device memory for DMA buffers or locked memory buffers for use in
hardware interrupts or something, but since it just defines to SDL_malloc
and SDL_free now, I took it out for clarity's sake.
- It's now always called if device->hidden isn't NULL, even if OpenDevice()
failed halfway through. This lets implementation code not have to clean up
itself on every possible failure point; just return an error and SDL will
handle it for you.
- Implementations can assume this->hidden != NULL and not check for it.
- Implementations don't have to set this->hidden = NULL when done, because
the caller is always about to free(this).
- Don't reset other fields that are in a block of memory about to be free()'d.
- Implementations all now free things like internal mix buffers last, after
closing devices and such, to guarantee they definitely aren't in use anymore
at the point of deallocation.
This allows us to set an explicit stack size (overriding the system default
and the global hint an app might have set), and remove all the macro salsa
for dealing with _beginthreadex and such, as internal threads always set those
to NULL anyhow.
I've taken some guesses on reasonable (and tiny!) stack sizes for our
internal threads, but some of these might turn out to be too small in
practice and need an increase. Most of them are simple functions, though.
The internal function SDL_EGL_LoadLibrary() did not delete and remove a mostly
uninitialized data structure if loading the library first failed. A later
attempt to use EGL then skipped initialization and assumed it was previously
successful
because the data structure now already existed. This led to at least one crash
in the internal function SDL_EGL_ChooseConfig() because a NULL pointer was
dereferenced to make a call to eglBindAPI().