

Class AudioFormat

AudioFormat is the class that specifies a particular arrangement of data in a sound stream. By examining the information stored in the audio format, you can discover how to interpret the bits in the binary sound data.

Every data line has an audio format associated with its data stream. The audio format of a source (playback) data line indicates what kind of data the data line expects to receive for output. For a target (capture) data line, the audio format specifies the kind of data that can be read from the line. Sound files also have audio formats, of course. The AudioFileFormat class encapsulates an AudioFormat in addition to other, file-specific information.
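For instance, a program can obtain the AudioFormat associated with a stream and examine these properties before deciding how to handle the data. The following is a minimal sketch, assuming a sound file at the placeholder path clip.wav that the installed providers can decode; it uses AudioSystem and AudioInputStream from the same javax.sound.sampled package to obtain the format.

```java
import java.io.File;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class InspectFormat {
    public static void main(String[] args) throws Exception {
        // "clip.wav" is a placeholder; any file the installed providers can decode will do.
        try (AudioInputStream in = AudioSystem.getAudioInputStream(new File("clip.wav"))) {
            AudioFormat format = in.getFormat();
            System.out.println("Encoding:    " + format.getEncoding());
            System.out.println("Sample rate: " + format.getSampleRate() + " Hz");
            System.out.println("Sample size: " + format.getSampleSizeInBits() + " bits");
            System.out.println("Channels:    " + format.getChannels());
            System.out.println("Frame size:  " + format.getFrameSize() + " bytes");
            System.out.println("Frame rate:  " + format.getFrameRate() + " frames/s");
            System.out.println("Big-endian:  " + format.isBigEndian());
        }
    }
}
```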
The AudioFormat class accommodates a number of common sound-file encoding techniques, including pulse-code modulation (PCM), mu-law encoding, and a-law encoding. These encoding techniques are predefined, but service providers can create new encoding types. The encoding that a specific format uses is named by its encoding field.

In addition to the encoding, the audio format includes other properties that further specify the exact arrangement of the data. These include the number of channels, sample rate, sample size, byte order, frame rate, and frame size. Sounds may have different numbers of audio channels: one for mono, two for stereo. The sample rate measures how many "snapshots" (samples) of the sound pressure are taken per second, per channel. (If the sound is stereo rather than mono, two samples are actually measured at each instant of time: one for the left channel and another for the right channel; however, the sample rate still measures the number per channel, so the rate is the same regardless of the number of channels.) The sample size indicates how many bits are used to store each snapshot; 8 and 16 are typical values. For 16-bit samples (or any other sample size larger than a byte), byte order is important; the bytes in each sample are arranged in either the "little-endian" or "big-endian" style.

For encodings like PCM, a frame consists of the set of samples for all channels at a given point in time, and so the size of a frame (in bytes) is always equal to the size of a sample (in bytes) times the number of channels. However, with some other sorts of encodings a frame can contain a bundle of compressed data for a whole series of samples, as well as additional, non-sample data. For such encodings, the sample rate and sample size refer to the data after it is decoded into PCM, and so they are completely different from the frame rate and frame size.
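To make the PCM relationships concrete, here is a minimal sketch using the linear PCM convenience constructor; the 44100 Hz, 16-bit, stereo values are just illustrative CD-quality parameters, not anything required by the class.

```java
import javax.sound.sampled.AudioFormat;

public class PcmFrameArithmetic {
    public static void main(String[] args) {
        // Linear PCM: 44100 Hz, 16 bits per sample, 2 channels, signed, little-endian.
        AudioFormat cd = new AudioFormat(44100f, 16, 2, true, false);

        // For PCM, a frame holds one sample per channel, so:
        //   frame size (bytes) = sample size (bytes) * channels = 2 * 2 = 4
        int computedFrameSize = (cd.getSampleSizeInBits() / 8) * cd.getChannels();

        System.out.println(cd.getFrameSize() + " bytes per frame (reported)");   // 4
        System.out.println(computedFrameSize + " bytes per frame (computed)");   // 4

        // For PCM, one frame is produced per sampling instant, so the frame rate
        // equals the sample rate.
        System.out.println(cd.getFrameRate() + " frames per second");            // 44100.0
        System.out.println(cd.getSampleRate() + " samples per second, per channel");
    }
}
```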
Since: 1.3
See Also: DataLine.getFormat()

Nested class summary:
    AudioFormat.Encoding - The Encoding class names the specific type of data representation used for a sound stream.

Field summary:
    bigEndian - Indicates whether the audio data is stored in big-endian or little-endian order.
    channels - The number of audio channels in this format (1 for mono, 2 for stereo).
    encoding - The audio encoding technique used by this format.
    frameRate - The number of frames played or recorded per second, for sounds that have this format.
    frameSize - The number of bytes in each frame of a sound that has this format.
    sampleRate - The number of samples played or recorded per second, for sounds that have this format.
    sampleSizeInBits - The number of bits in each sample of a sound that has this format.

Constructor summary:
    AudioFormat(AudioFormat.Encoding encoding, float sampleRate, int sampleSizeInBits, int channels, int frameSize, float frameRate, boolean bigEndian) - Constructs an AudioFormat with the given parameters.
    AudioFormat(float sampleRate, int sampleSizeInBits, int channels, boolean signed, boolean bigEndian) - Constructs an AudioFormat with a linear PCM encoding and the given parameters.

Method summary (partial):
    getEncoding() - Obtains the type of encoding for sounds in this format.
    getFrameRate() - Obtains the frame rate in frames per second.
    isBigEndian() - Indicates whether the audio data is stored in big-endian or little-endian order.
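The two constructors above can be seen side by side in the short sketch below; the parameter values are illustrative rather than prescribed by the class. The fully specified constructor states every property explicitly, while the linear PCM convenience constructor derives the frame size and frame rate from the other parameters.

```java
import javax.sound.sampled.AudioFormat;

public class ConstructFormats {
    public static void main(String[] args) {
        // Fully specified constructor: encoding, sample rate, sample size in bits,
        // channels, frame size in bytes, frame rate, and byte order.
        AudioFormat explicit = new AudioFormat(
                AudioFormat.Encoding.PCM_SIGNED,
                44100f,   // sample rate
                16,       // sample size in bits
                2,        // channels
                4,        // frame size: 16-bit stereo PCM is 4 bytes per frame
                44100f,   // frame rate
                false);   // little-endian

        // Linear PCM convenience constructor; frame size and frame rate are derived.
        AudioFormat pcm = new AudioFormat(44100f, 16, 2, true, false);

        System.out.println(explicit.getEncoding());   // PCM_SIGNED
        System.out.println(explicit.getFrameRate());  // 44100.0
        System.out.println(explicit.isBigEndian());   // false
        System.out.println(pcm.matches(explicit));    // true: both describe the same arrangement of data
    }
}
```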
