
pocketsphinx - Man Page

Run speech recognition on audio data


Synopsis

pocketsphinx [ options... ] [ live | single | help | soxflags ] INPUTS...


Description

The ‘pocketsphinx’ command-line program reads single-channel 16-bit PCM audio from one or more input files (or ‘-’ to read from standard input), and attempts to recognize speech in it using the default acoustic and language model. The input files can be raw audio, WAV, or NIST Sphere files, though some of these may not be recognized properly.  It accepts a large number of options which you probably don't care about, and a command which defaults to ‘live’. The commands are as follows:


help
Print a long list of those options you don't care about.


config
Dump configuration as JSON to standard output (can be loaded with the ‘-config’ option).


live
Detect speech segments in input files, run recognition on them (using those options you don't care about), and write the results to standard output in line-delimited JSON. I realize this isn't the prettiest format, but it sure beats XML. Each line contains a JSON object with these fields, which have short names to make the lines more readable:

"b": Start time in seconds, from the beginning of the stream

"d": Duration in seconds

"p": Estimated probability of the recognition result, i.e. a number between 0 and 1 which may be used as a confidence score

"t": Full text of recognition result

"w": List of segments (usually words), each of which in turn contains the ‘b’, ‘d’, ‘p’, and ‘t’ fields, for start, duration, probability, and the text of the word. In the future we may also support hierarchical results, in which case a ‘w’ list could be present inside segments as well.
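For illustration, one line of ‘live’ output might look like the line below. All values here are invented, and python3 merely stands in for any JSON-aware tool:

```shell
# A hypothetical line of line-delimited JSON output (field values invented
# for illustration; a real run would produce one such line per utterance).
line='{"b":0.53,"d":1.32,"p":0.87,"t":"go forward ten meters","w":[{"b":0.53,"d":0.30,"p":0.95,"t":"go"},{"b":0.83,"d":0.45,"p":0.91,"t":"forward"}]}'

# Print the full text, then each word with its start time.
printf '%s\n' "$line" | python3 -c '
import json, sys
for raw in sys.stdin:
    r = json.loads(raw)
    print(r["t"])          # full text of the result
    for w in r["w"]:       # word segments
        print(w["t"], w["b"])
'
```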


single
Recognize the input as a single utterance, and write a JSON object in the same format described above.


align
Align a single input file (or ‘-’ for standard input) to a word sequence, and write a JSON object in the same format described above. The first positional argument is the input, and all subsequent ones are concatenated to make the text, to avoid surprises if you forget to quote it.  You are responsible for normalizing the text to remove punctuation, uppercase, centipedes, etc. For example:

    pocketsphinx align goforward.wav "go forward ten meters"

By default, only word-level alignment is done.  To get phone alignments, pass `-phone_align yes` in the flags, e.g.:

    pocketsphinx -phone_align yes align audio.wav $text

This will produce output that is not particularly readable, but you can use jq (https://stedolan.github.io/jq/) to clean it up.  For example, you can get just the word names and start times like this:

    pocketsphinx align audio.wav $text | jq '.w[]|[.t,.b]'

Or you could get the phone names and durations like this:

    pocketsphinx -phone_align yes align audio.wav $text | jq '.w[]|.w[]|[.t,.d]'

There are many, many other possibilities, of course.
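For instance, since ‘p’ may be used as a confidence score, you could flag words the recognizer is unsure about. This sketch uses an invented result line, a threshold of 0.5 chosen arbitrarily, and python3 standing in for jq:

```shell
# Hypothetical alignment result (all values invented for illustration).
result='{"b":0.0,"d":2.0,"p":0.8,"t":"go forward ten meters","w":[{"b":0.0,"d":0.3,"p":0.98,"t":"go"},{"b":0.3,"d":0.5,"p":0.42,"t":"forward"},{"b":0.8,"d":0.4,"p":0.95,"t":"ten"},{"b":1.2,"d":0.8,"p":0.99,"t":"meters"}]}'

# List words whose estimated probability falls below 0.5.
printf '%s\n' "$result" | python3 -c '
import json, sys
r = json.load(sys.stdin)
for w in r["w"]:
    if w["p"] < 0.5:
        print("%s @ %.2fs (p=%.2f)" % (w["t"], w["b"], w["p"]))
'
```

The equivalent jq filter would be ‘.w[] | select(.p < 0.5) | .t’.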


Print a usage and help text with a list of possible arguments.


soxflags
Return arguments to ‘sox’ which will create the appropriate input format. Note that because the ‘sox’ command-line is slightly quirky, these must always come after the filename or ‘-d’ (which tells ‘sox’ to read from the microphone). You can run live recognition like this:

    sox -d $(pocketsphinx soxflags) | pocketsphinx -

or decode from a file named "audio.mp3" like this:

    sox audio.mp3 $(pocketsphinx soxflags) | pocketsphinx -

By default, only errors are printed to standard error; if you want more information, you can pass ‘-loglevel INFO’. Partial results are not printed; maybe they will be in the future, but don't hold your breath. Force-alignment is likely to be supported soon, however.



Options

Automatic gain control for c0 ('max', 'emax', 'noise', or 'none')


Initial threshold for automatic gain control


phoneme decoding with phonetic lm (given here)


Perform phoneme decoding with phonetic lm and context-independent units only


Preemphasis parameter


Inverse of acoustic model scale for confidence score calculation


Inverse weight applied to acoustic scores.


Print results and backtraces to log.


Beam width applied to every frame in Viterbi search (smaller values mean wider beam)


Run bestpath (Dijkstra) search over word lattice (3rd pass)


Language model probability weight for bestpath search


Number of components in the input feature vector


Cepstral mean normalization scheme ('live', 'batch', or 'none')


Initial values (comma-separated) for cepstral mean when 'live' is used


Compute all senone scores in every frame (can be faster when there are many senones)


pronunciation dictionary (lexicon) input file


Dictionary is case sensitive (NOTE: case insensitivity applies to ASCII characters only)


Add 1/2-bit noise


Use double bandwidth filters (same center freq)


Frame GMM computation downsampling ratio


word pronunciation dictionary input file


Feature stream type, depends on the acoustic model


containing feature extraction parameters.


Filler word transition probability


Frame rate


format finite state grammar file


Add alternate pronunciations to FSG


Insert filler words at each state.


Run forward flat-lexicon search over word lattice (2nd pass)


Beam width applied to every frame in second-pass flat search


Minimum number of end frames for a word to be searched in fwdflat search


Language model probability weight for flat lexicon (2nd pass) decoding


Window of frames in lattice to search for successor words in fwdflat search


Beam width applied to word exits in second-pass flat search


Run forward lexicon-tree search (1st pass)


containing acoustic model files.


Endianness of input data, big or little, ignored if NIST or MS Wav


grammar file


to spot


file with keyphrases to spot, one per line


Delay to wait for best detection score


Phone loop probability for keyphrase spotting


Threshold for p(hyp)/p(alternatives) ratio


Initial backpointer table size


containing transformation matrix to be applied to features (single-stream features only)


Dimensionality of output of feature transformation (0 to use entire matrix)


Length of sin-curve for liftering, or 0 for no liftering.


trigram language model input file


a set of language model


language model in -lmctl to use by default


Base in which all log-likelihoods are calculated


to write log messages in


Minimum level of log messages (DEBUG, INFO, WARN, ERROR)


Write out logspectral files instead of cepstra


Lower edge of filters


Beam width applied to last phone in words


Beam width applied to last phone in single-phone words


Language model probability weight


Maximum number of active HMMs to maintain at each frame (or -1 for no pruning)


Maximum number of distinct word exits at each frame (or -1 for no pruning)


definition input file


gaussian means input file


to log feature files to


Nodes ignored in lattice construction if they persist for fewer than N frames


mixture weights input file (uncompressed)


Senone mixture weights floor (applied to data from -mixw file)


transformation to apply to means and variances


Use memory-mapped I/O (if possible) for model files


Number of cep coefficients


Size of FFT, or 0 to set automatically (recommended)


Number of filter banks


New word transition penalty


Beam width applied to phone transitions


Phone insertion penalty


Beam width applied to phone loop search for lookahead


Beam width applied to phone loop transitions for lookahead


Phone insertion penalty for phone loop


Weight for phoneme lookahead penalties


Phoneme lookahead window size, in frames


to log raw audio files to


Remove DC offset from each frame


Remove noise using spectral subtraction


Round mel filter frequencies to DFT points


Sampling rate


Seed for random number generator; if less than zero, pick our own


dump (compressed mixture weights) input file


to log senone score files to


to codebook mapping input file (usually not needed)


Silence word transition probability


Write out cepstral-smoothed logspectral files


specification (e.g., 24,0-11/25,12-23/26-38 or 0-12/13-25/26-38)


state transition matrix input file


HMM state transition probability floor (applied to -tmat file)


Maximum number of top Gaussians to use in scoring.


Beam width used to determine top-N Gaussians (or a list, per-feature)


rule for JSGF (first public rule is default)


Which type of transform to use to calculate cepstra (legacy, dct, or htk)


Normalize mel filters to unit area


Upper edge of filters


Unigram weight


gaussian variances input file


Mixture gaussian variance floor (applied to data from -var file)


Variance normalize each utterance (only if CMN == current)


Show input filenames


defining the warping function


Warping function type (or shape)


Beam width applied to word exits


Word insertion penalty


Hamming window length


Author

Written by numerous people at CMU from 1994 onwards.  This manual page was written by David Huggins-Daines <dhdaines@gmail.com>.

See Also

pocketsphinx_batch(1), sphinx_fe(1).