IAudioClient::Initialize

The Initialize method initializes the audio stream.

HRESULT Initialize(
  AUDCLNT_SHAREMODE  ShareMode,
  DWORD  StreamFlags,
  REFERENCE_TIME  hnsBufferDuration,
  REFERENCE_TIME  hnsPeriodicity,
  const WAVEFORMATEX  *pFormat,
  LPCGUID  AudioSessionGuid
);

Parameters

ShareMode

[in]  The sharing mode for the connection. Through this parameter, the client tells the audio engine whether it wants to share the audio endpoint device with other clients. The client should set this parameter to one of the following AUDCLNT_SHAREMODE enumeration values:

AUDCLNT_SHAREMODE_EXCLUSIVE

AUDCLNT_SHAREMODE_SHARED

StreamFlags

[in]  Flags to control creation of the stream. The client should set this parameter to 0 or to the bitwise OR of one or more of the following AUDCLNT_STREAMFLAGS_XXX constants:

AUDCLNT_STREAMFLAGS_CROSSPROCESS

AUDCLNT_STREAMFLAGS_LOOPBACK

AUDCLNT_STREAMFLAGS_EVENTCALLBACK

AUDCLNT_STREAMFLAGS_NOPERSIST

hnsBufferDuration

[in]  The buffer capacity as a time value. This parameter is of type REFERENCE_TIME and is expressed in 100-nanosecond units. This parameter contains the buffer size that the caller requests for the buffer that the audio application will share with the audio engine (in shared mode) or with the endpoint device (in exclusive mode). If the call succeeds, the method allocates a buffer that is at least this large. For more information about REFERENCE_TIME, see the Windows SDK documentation. For more information about buffering requirements, see Remarks.

hnsPeriodicity

[in]  The device period. This parameter can be nonzero only in exclusive mode. In shared mode, always set this parameter to 0. In exclusive mode, this parameter specifies the requested scheduling period for successive buffer accesses by the audio endpoint device. If the requested device period lies outside the range that is set by the device's minimum period and the system's maximum period, then the method clamps the period to that range. If this parameter is 0, the method sets the device period to its default value. To obtain the default device period, call the IAudioClient::GetDevicePeriod method. In exclusive mode, if the AUDCLNT_STREAMFLAGS_EVENTCALLBACK flag is set, hnsPeriodicity must be nonzero and equal to hnsBufferDuration.

pFormat

[in]  Pointer to a format descriptor. This parameter must point to a valid format descriptor of type WAVEFORMATEX (or WAVEFORMATEXTENSIBLE). For more information, see Remarks.

AudioSessionGuid

[in]  Pointer to a session GUID. This parameter points to a GUID value that identifies the audio session that the stream belongs to. If the GUID identifies a session that has been previously opened, the method adds the stream to that session. If the GUID does not identify an existing session, the method opens a new session and adds the stream to that session. The stream remains a member of the same session for its lifetime. Setting this parameter to NULL is equivalent to passing a pointer to a GUID_NULL value.

Return Value

If the method succeeds, it returns S_OK. If it fails, possible return codes include, but are not limited to, the values shown in the following table.

Return code Description
AUDCLNT_E_ALREADY_INITIALIZED The IAudioClient object is already initialized.
AUDCLNT_E_WRONG_ENDPOINT_TYPE The AUDCLNT_STREAMFLAGS_LOOPBACK flag is set but the endpoint device is a capture device, not a rendering device.
AUDCLNT_E_DEVICE_INVALIDATED The audio endpoint device has been unplugged, or the audio hardware or associated hardware resources have been reconfigured, disabled, removed, or otherwise made unavailable for use.
AUDCLNT_E_DEVICE_IN_USE The endpoint device is already in use. Either the device is being used in exclusive mode, or the device is being used in shared mode and the caller asked to use the device in exclusive mode.
AUDCLNT_E_UNSUPPORTED_FORMAT The audio engine (shared mode) or audio endpoint device (exclusive mode) does not support the specified format.
AUDCLNT_E_EXCLUSIVE_MODE_NOT_ALLOWED The caller is requesting exclusive-mode use of the endpoint device, but the user has disabled exclusive-mode use of the device.
AUDCLNT_E_BUFDURATION_PERIOD_NOT_EQUAL The AUDCLNT_STREAMFLAGS_EVENTCALLBACK flag is set but parameters hnsBufferDuration and hnsPeriodicity are not equal.
AUDCLNT_E_SERVICE_NOT_RUNNING The Windows audio service is not running.
E_POINTER Parameter pFormat is NULL.
E_INVALIDARG Parameter pFormat points to an invalid format description; or the AUDCLNT_STREAMFLAGS_LOOPBACK flag is set but ShareMode is not equal to AUDCLNT_SHAREMODE_SHARED; or the AUDCLNT_STREAMFLAGS_CROSSPROCESS flag is set but ShareMode is equal to AUDCLNT_SHAREMODE_EXCLUSIVE.
E_OUTOFMEMORY Out of memory.

Remarks

After activating an IAudioClient interface on an audio endpoint device, the client must successfully call Initialize once and only once to initialize the audio stream between the client and the device. The client can either connect directly to the audio hardware (exclusive mode) or indirectly through the audio engine (shared mode). In the Initialize call, the client specifies the audio data format, the buffer size, and the audio session for the stream.

An attempt to create a shared-mode stream can succeed only if the audio device is already operating in shared mode or the device is currently unused. An attempt to create a shared-mode stream fails if the device is already operating in exclusive mode.

Whether an attempt to create an exclusive-mode stream succeeds depends on several factors, including the availability of the device and the user-controlled settings that govern exclusive-mode operation of the device. For more information, see Exclusive-Mode Streams.

An IAudioClient object supports exactly one connection to the audio engine or audio hardware. This connection lasts for the lifetime of the IAudioClient object.

The client should call the following methods only after calling Initialize:

IAudioClient::GetBufferSize

IAudioClient::GetCurrentPadding

IAudioClient::GetService

IAudioClient::GetStreamLatency

IAudioClient::Reset

IAudioClient::SetEventHandle

IAudioClient::Start

IAudioClient::Stop

The following methods do not require that Initialize be called first:

IAudioClient::GetDevicePeriod

IAudioClient::GetMixFormat

IAudioClient::IsFormatSupported

These methods can be called any time after activating the IAudioClient interface.

Before calling Initialize to set up a shared-mode or exclusive-mode connection, the client can call the IAudioClient::IsFormatSupported method to discover whether the audio engine or audio endpoint device supports a particular format in that mode. Before opening a shared-mode connection, the client can obtain the audio engine's mix format by calling the IAudioClient::GetMixFormat method.
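
The following sketch shows this sequence for a shared-mode stream: the client obtains the mix format from GetMixFormat and passes it to Initialize. The 1-second buffer duration and the NULL session GUID are illustrative choices, not requirements of the API.

#include <windows.h>
#include <audioclient.h>

// REFERENCE_TIME is expressed in 100-nanosecond units; 1 second = 10,000,000 units.
#define REFTIMES_PER_SEC  10000000

HRESULT InitializeSharedStream(IAudioClient *pAudioClient)
{
    WAVEFORMATEX *pMixFormat = NULL;

    // The audio engine always accepts a shared-mode stream in its own mix format.
    HRESULT hr = pAudioClient->GetMixFormat(&pMixFormat);
    if (FAILED(hr)) return hr;

    // Request a 1-second buffer; the method may round this up (see Remarks).
    hr = pAudioClient->Initialize(
        AUDCLNT_SHAREMODE_SHARED,
        0,                      // no stream flags
        REFTIMES_PER_SEC,       // hnsBufferDuration
        0,                      // hnsPeriodicity must be 0 in shared mode
        pMixFormat,
        NULL);                  // NULL is equivalent to GUID_NULL

    CoTaskMemFree(pMixFormat);
    return hr;
}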

The endpoint buffer that is shared between the client and audio engine must be large enough to prevent glitches from occurring in the audio stream between processing passes by the client and audio engine. For a rendering endpoint, the client thread periodically writes data to the buffer, and the audio engine thread periodically reads data from the buffer. For a capture endpoint, the engine thread periodically writes to the buffer, and the client thread periodically reads from the buffer. In either case, if the periods of the client thread and engine thread are not equal, the buffer must be large enough to accommodate the longer of the two periods without allowing glitches to occur.

The client specifies a buffer size through the hnsBufferDuration parameter. The client is responsible for requesting a buffer that is large enough to ensure that glitches cannot occur between the periodic processing passes that it performs on the buffer. Similarly, the Initialize method ensures that the buffer is never smaller than the minimum buffer size needed to ensure that glitches do not occur between the periodic processing passes that the engine thread performs on the buffer. If the client requests a buffer size that is smaller than the audio engine's minimum required buffer size, the method sets the buffer size to this minimum buffer size rather than to the buffer size requested by the client.

If the client requests a buffer size (through the hnsBufferDuration parameter) that is not an integral number of audio frames, the method rounds up the requested buffer size to the next integral number of frames.

Following the Initialize call, the client should call the IAudioClient::GetBufferSize method to get the precise size of the endpoint buffer. During each processing pass, the client will need the actual buffer size to calculate how much data to transfer to or from the buffer. The client calls the IAudioClient::GetCurrentPadding method to determine how much of the data in the buffer is currently available for processing.
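
The following sketch illustrates one such processing pass for a shared-mode rendering stream. It assumes that pRenderClient was obtained through IAudioClient::GetService and that bufferFrameCount was obtained from IAudioClient::GetBufferSize after Initialize.

#include <audioclient.h>

HRESULT WriteOnePass(IAudioClient *pAudioClient,
                     IAudioRenderClient *pRenderClient,
                     UINT32 bufferFrameCount)
{
    UINT32 paddingFrames = 0;
    HRESULT hr = pAudioClient->GetCurrentPadding(&paddingFrames);
    if (FAILED(hr)) return hr;

    // Space currently available for the client to fill, in audio frames.
    UINT32 framesAvailable = bufferFrameCount - paddingFrames;

    BYTE *pData = NULL;
    hr = pRenderClient->GetBuffer(framesAvailable, &pData);
    if (FAILED(hr)) return hr;

    // ... write framesAvailable frames of audio data to pData ...

    return pRenderClient->ReleaseBuffer(framesAvailable, 0);
}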

To achieve the minimum stream latency between the client application and audio endpoint device, the client thread should run at the same period as the audio engine thread. The period of the engine thread is fixed and cannot be controlled by the client. Making the client's period smaller than the engine's period unnecessarily increases the client thread's load on the processor without improving latency or decreasing the buffer size. To determine the period of the engine thread, the client can call the IAudioClient::GetDevicePeriod method. To set the buffer to the minimum size required by the engine thread, the client should call Initialize with the hnsBufferDuration parameter set to 0. Following the Initialize call, the client can get the size of the resulting buffer by calling IAudioClient::GetBufferSize.
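
The following sketch follows that minimum-latency pattern: it queries the engine period with GetDevicePeriod, requests the minimum buffer by passing 0 for hnsBufferDuration, and reads back the allocated size. pFormat is assumed to be the engine's mix format.

#include <audioclient.h>

HRESULT InitializeMinimumLatency(IAudioClient *pAudioClient, const WAVEFORMATEX *pFormat)
{
    REFERENCE_TIME hnsDefaultPeriod = 0, hnsMinimumPeriod = 0;
    HRESULT hr = pAudioClient->GetDevicePeriod(&hnsDefaultPeriod, &hnsMinimumPeriod);
    if (FAILED(hr)) return hr;

    // hnsBufferDuration = 0 directs Initialize to allocate the smallest buffer
    // that avoids glitches between engine passes.
    hr = pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED, 0, 0, 0, pFormat, NULL);
    if (FAILED(hr)) return hr;

    UINT32 bufferFrameCount = 0;
    hr = pAudioClient->GetBufferSize(&bufferFrameCount);   // actual size in frames
    if (FAILED(hr)) return hr;

    // The client thread should now schedule its processing passes at
    // hnsDefaultPeriod, the engine thread's fixed period.
    return S_OK;
}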

A client has the option of requesting a buffer size that is larger than what is strictly necessary to make timing glitches rare or nonexistent. Increasing the buffer size does not necessarily increase the stream latency. For a rendering stream, the latency through the buffer is determined solely by the separation between the client's write pointer and the engine's read pointer. For a capture stream, the latency through the buffer is determined solely by the separation between the engine's write pointer and the client's read pointer.

The loopback flag (AUDCLNT_STREAMFLAGS_LOOPBACK) enables audio loopback. A client can enable audio loopback only on a rendering endpoint with a shared-mode stream. Audio loopback is provided primarily to support acoustic echo cancellation (AEC).

An AEC client requires both a rendering endpoint and the ability to capture the output stream from the audio engine. The engine's output stream is the global mix that the audio device plays through the speakers. If audio loopback is enabled, a client can open a capture buffer for the global audio mix by calling the IAudioClient::GetService method to obtain an IAudioCaptureClient interface on the rendering stream object. If audio loopback is not enabled, then an attempt to open a capture buffer on a rendering stream will fail. The loopback data in the capture buffer is in the device format, which the client can obtain by calling the IAudioClient::GetMixFormat method.

For more information about audio loopback, see Loopback Recording.
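
The following sketch outlines a loopback setup along these lines. It assumes pAudioClient was activated on a rendering endpoint device; the 1-second buffer duration is an illustrative choice.

#include <windows.h>
#include <audioclient.h>

HRESULT InitializeLoopbackCapture(IAudioClient *pAudioClient,
                                  IAudioCaptureClient **ppCaptureClient)
{
    WAVEFORMATEX *pMixFormat = NULL;

    // Loopback data is delivered in the device (mix) format.
    HRESULT hr = pAudioClient->GetMixFormat(&pMixFormat);
    if (FAILED(hr)) return hr;

    // Loopback requires a shared-mode stream on a rendering endpoint.
    hr = pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED,
                                  AUDCLNT_STREAMFLAGS_LOOPBACK,
                                  10000000,    // 1 second in 100-ns units
                                  0,
                                  pMixFormat,
                                  NULL);
    CoTaskMemFree(pMixFormat);
    if (FAILED(hr)) return hr;

    // The capture interface is obtained from the rendering stream object.
    return pAudioClient->GetService(__uuidof(IAudioCaptureClient),
                                    (void**)ppCaptureClient);
}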

The AUDCLNT_STREAMFLAGS_EVENTCALLBACK flag indicates that processing of the audio buffer by the client will be event driven. WASAPI supports event-driven buffering to enable low-latency processing of both shared-mode and exclusive-mode streams.

The initial release of Windows Vista supports event-driven buffering (that is, the use of the AUDCLNT_STREAMFLAGS_EVENTCALLBACK flag) for rendering streams only.

To enable event-driven buffering, the client must provide an event handle to the system. Following the Initialize call and before calling the IAudioClient::Start method to start the stream, the client must call the IAudioClient::SetEventHandle method to set the event handle. While the stream is running, the system periodically signals the event to indicate to the client that audio data is available for processing. Between processing passes, the client thread waits on the event handle by calling a synchronization function such as WaitForSingleObject. For more information about synchronization functions, see the Windows SDK documentation.
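
The following sketch shows that sequence. It assumes the stream was initialized with the AUDCLNT_STREAMFLAGS_EVENTCALLBACK flag; the processing pass itself is left as a placeholder.

#include <windows.h>
#include <audioclient.h>

HRESULT RunEventDrivenLoop(IAudioClient *pAudioClient, volatile bool *pKeepRunning)
{
    // Auto-reset event that the system signals when buffer data is ready.
    HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    if (hEvent == NULL) return HRESULT_FROM_WIN32(GetLastError());

    // The event handle must be set after Initialize and before Start.
    HRESULT hr = pAudioClient->SetEventHandle(hEvent);
    if (SUCCEEDED(hr)) hr = pAudioClient->Start();

    while (SUCCEEDED(hr) && *pKeepRunning)
    {
        // Block until the engine signals that audio data is ready for processing.
        if (WaitForSingleObject(hEvent, 2000) != WAIT_OBJECT_0) break;

        // ... perform one processing pass on the endpoint buffer ...
    }

    if (SUCCEEDED(hr)) pAudioClient->Stop();
    CloseHandle(hEvent);
    return hr;
}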

For a shared-mode stream that uses event-driven buffering, the caller must set both hnsPeriodicity and hnsBufferDuration to 0. The Initialize method determines how large a buffer to allocate based on the scheduling period of the audio engine. Although the client's buffer processing thread is event driven, the basic buffer management process, as described previously, is unaltered. Each time the thread awakens, it should call IAudioClient::GetCurrentPadding to determine how much data to write to a rendering buffer or read from a capture buffer. In contrast to the two buffers that the Initialize method allocates for an exclusive-mode stream that uses event-driven buffering, a shared-mode stream requires a single buffer.
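
A shared-mode, event-driven Initialize call therefore looks like the following sketch, in which pFormat is assumed to be the mix format.

#include <audioclient.h>

HRESULT InitializeSharedEventDriven(IAudioClient *pAudioClient,
                                    const WAVEFORMATEX *pFormat)
{
    return pAudioClient->Initialize(
        AUDCLNT_SHAREMODE_SHARED,
        AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
        0,          // hnsBufferDuration: 0 for shared-mode event-driven buffering
        0,          // hnsPeriodicity: always 0 in shared mode
        pFormat,
        NULL);
}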

For an exclusive-mode stream that uses event-driven buffering, the caller must specify nonzero values for hnsPeriodicity and hnsBufferDuration, and the values of these two parameters must be equal. The Initialize method allocates two buffers for the stream. Each buffer is equal in duration to the value of the hnsBufferDuration parameter. Following the Initialize call for a rendering stream, the caller should fill the first of the two buffers before starting the stream. For a capture stream, the buffers are initially empty, and the caller should assume that each buffer remains empty until the event for that buffer is signaled. While the stream is running, the system alternately sends one buffer or the other to the client—this form of double buffering is referred to as "ping-ponging". Each time the client receives a buffer from the system (which the system indicates by signaling the event), the client must process the entire buffer. For example, if the client requests a packet size from the IAudioRenderClient::GetBuffer method that does not match the buffer size, the method fails. Calls to the IAudioClient::GetCurrentPadding method are unnecessary because the packet size must always equal the buffer size. In contrast to the buffering modes discussed previously, the latency for an event-driven, exclusive-mode stream depends directly on the buffer size.
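
The following sketch shows an exclusive-mode, event-driven Initialize call that uses the device's minimum period, obtained from GetDevicePeriod, for both duration parameters. pFormat is assumed to be a format that the endpoint device accepts in exclusive mode (see IsFormatSupported).

#include <audioclient.h>

HRESULT InitializeExclusiveEventDriven(IAudioClient *pAudioClient,
                                       const WAVEFORMATEX *pFormat)
{
    REFERENCE_TIME hnsDefault = 0, hnsMinimum = 0;
    HRESULT hr = pAudioClient->GetDevicePeriod(&hnsDefault, &hnsMinimum);
    if (FAILED(hr)) return hr;

    return pAudioClient->Initialize(
        AUDCLNT_SHAREMODE_EXCLUSIVE,
        AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
        hnsMinimum,     // hnsBufferDuration
        hnsMinimum,     // hnsPeriodicity: must equal hnsBufferDuration
        pFormat,
        NULL);
}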

As explained in Audio Sessions, the default behavior for a session that contains rendering streams is that its volume and mute settings persist across system restarts. The AUDCLNT_STREAMFLAGS_NOPERSIST flag overrides the default behavior and makes the settings nonpersistent. This flag has no effect on sessions that contain capture streams—the settings for those sessions are never persistent. In addition, the settings for a session that contains a loopback stream (a stream that is initialized with the AUDCLNT_STREAMFLAGS_LOOPBACK flag) are nonpersistent.

Only a session that connects to a rendering endpoint device can have persistent volume and mute settings. The first stream to be added to the session determines whether the session's settings are persistent. Thus, if the AUDCLNT_STREAMFLAGS_NOPERSIST or AUDCLNT_STREAMFLAGS_LOOPBACK flag is set during initialization of the first stream, the session's settings are not persistent. Otherwise, they are persistent. Their persistence is unaffected by additional streams that might be subsequently added or removed during the lifetime of the session object.

After a call to Initialize has successfully initialized an IAudioClient interface instance, a subsequent Initialize call to initialize the same interface instance will fail and return error code AUDCLNT_E_ALREADY_INITIALIZED.

If the initial call to Initialize fails, subsequent Initialize calls might fail and return error code AUDCLNT_E_ALREADY_INITIALIZED, even though the interface has not been initialized. If this occurs, release the IAudioClient interface and obtain a new IAudioClient interface from the MMDevice API before calling Initialize again.
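
The following sketch illustrates that recovery pattern. pDevice is assumed to be the IMMDevice for the endpoint, and the shared-mode arguments are illustrative.

#include <windows.h>
#include <audioclient.h>
#include <mmdeviceapi.h>

HRESULT InitializeWithRetry(IMMDevice *pDevice, IAudioClient **ppAudioClient,
                            const WAVEFORMATEX *pFormat)
{
    HRESULT hr = (*ppAudioClient)->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                                              10000000, 0, pFormat, NULL);
    if (SUCCEEDED(hr)) return hr;

    // Discard the failed instance; a later Initialize call on it might report
    // AUDCLNT_E_ALREADY_INITIALIZED even though it was never initialized.
    (*ppAudioClient)->Release();
    *ppAudioClient = NULL;

    // Obtain a new, uninitialized IAudioClient from the MMDevice API.
    hr = pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL,
                           NULL, (void**)ppAudioClient);
    if (FAILED(hr)) return hr;

    return (*ppAudioClient)->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                                        10000000, 0, pFormat, NULL);
}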

For code examples that call the Initialize method, see the following topics:

Rendering a Stream

Capturing a Stream

Requirements

Client: Windows Vista

Header: Include Audioclient.h.
