The basic task of a speech recogniser is to find the most probable sequence of words W which would sound like acoustic data A when spoken. If P(W|A) is the probability that the words W were spoken as A, the speech recogniser should select the most likely word sequence Ŵ given by
Ŵ = argmax_{W} P(W|A)
Bayes' formula can be used to substitute more readily estimated probabilities: P(W), the probability that W is uttered, and P(A|W), the probability that W sounds like A, giving
Ŵ = argmax_{W} P(W)P(A|W) / P(A)
P(A), the average probability of the acoustic data, is constant, so it can be removed from the maximisation:
Ŵ = argmax_{W} P(W)P(A|W)
This formula succinctly expresses the contribution of the two types of statistical model used in speech recognition. An acoustic model is used to estimate P(A|W), and a language model is used to estimate P(W).
For a sequence of n words, W = w_{1}, w_{2},...,w_{n}, the probability of the sequence can be expressed by the chain rule as the product of the probabilities of each successive word:-

P(W) = Π_{i=1}^{n} P(w_{i}|w_{1},...,w_{i-1})

where P(w_{i}|w_{1},...,w_{i-1}) is the probability of word w_{i} given all the preceding words w_{1},...,w_{i-1}. The history h_{i} = w_{1},...,w_{i-1} starts with a length of zero and increases by one for each successive word. In practice, the number of possible histories would be far too great for this to be a feasible way of estimating word probabilities. The number of possibilities has to be reduced by defining a practical number of equivalence classes which can be used to categorise the histories. The dominant approach is to use a trigram model for this purpose. A trigram is a unique sequence of three word types. It is sometimes necessary to distinguish between word types, which are distinct words in the vocabulary, and word tokens, which are instances of word types occurring in a given utterance. It is usually clear, however, which sense is meant.
In a trigram model, histories are equivalent if they end in the same two words, so
P(W) = Π_{i=1}^{n} P(w_{i}|w_{i-2},w_{i-1})
A not insignificant detail (as emerges below) is that the first two terms of this product are actually P(w_{1}) and P(w_{2}|w_{1}), as there are not yet sufficient preceding words for the general term to apply.
A maximum likelihood estimate of the trigram probabilities is readily obtained by counting the frequencies of trigrams and bigrams occurring in a set of utterances which is used to train the model:-
P(w_{i}|w_{i-2},w_{i-1}) = C(w_{i-2},w_{i-1},w_{i}) / C(w_{i-2},w_{i-1})
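As a sketch, the counting and division above can be implemented directly; the toy corpus and function name here are illustrative only:

```python
from collections import Counter

def train_trigram_ml(corpus):
    """Maximum likelihood trigram estimates:
    P(w_i | w_{i-2}, w_{i-1}) = C(w_{i-2}, w_{i-1}, w_i) / C(w_{i-2}, w_{i-1})."""
    trigram_counts = Counter(zip(corpus, corpus[1:], corpus[2:]))
    bigram_counts = Counter(zip(corpus, corpus[1:]))
    return {(u, v, w): c / bigram_counts[(u, v)]
            for (u, v, w), c in trigram_counts.items()}

# Toy training utterance (illustrative only)
corpus = "the cat sat on the mat the cat ate".split()
p = train_trigram_ml(corpus)
print(p[("the", "cat", "sat")])  # C(the,cat,sat) / C(the,cat) = 1/2
```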
The trigram probabilities can be substituted in the above product to give an estimate of the probability of the word sequence W. This overall probability is not actually calculated by a speech recogniser, which only uses the trigram probabilities of the sequences explored by the dynamic search algorithm.
P(W) = C(w_{1})/C( ) × C(w_{1},w_{2})/C(w_{1}) × C(w_{1},w_{2},w_{3})/C(w_{1},w_{2}) × C(w_{2},w_{3},w_{4})/C(w_{2},w_{3}) × ... × C(w_{n-2},w_{n-1},w_{n})/C(w_{n-2},w_{n-1})

= C(w_{1},w_{2},w_{3})/C( ) × C(w_{2},w_{3},w_{4})/C(w_{2},w_{3}) × ... × C(w_{n-2},w_{n-1},w_{n})/C(w_{n-2},w_{n-1})
The resulting formula does, however, help resolve a paradox. The probability of W had appeared to depend upon a conditional chain which followed the temporal sequence in which the words were uttered, whereas this direction of dependency was not inherent in the model. A trigram count such as C(w_{1},w_{2},w_{3}), for example, can represent the frequency of w_{1} occurring before (w_{2},w_{3}) as well as the frequency of w_{3} occurring after (w_{1},w_{2}). After cancellation of the first two "start-up" terms of the product, the remaining terms are symmetrical with respect to a forward or backward direction of dependency. C( ) represents the count of all word tokens in the training set.
For large vocabularies, the number of possible trigrams is very large and many of them are unlikely to occur in the training set. If they are assigned zero probability, however, this precludes recognition of words with corresponding histories. It is therefore necessary for the model to adjust the probability estimates for trigrams which are absent from or under-represented in the training set. This process is called smoothing.
A smoothing algorithm discounts some non-zero counts in order to obtain some probability mass, which it reallocates to the zero or low counts. There are algorithms with varying degrees of sophistication:-
The comparative performance of these smoothing algorithms can be evaluated by using the smoothed model probabilities to calculate the cross-entropy of each model, using as test data a sufficiently long sample of the target language. The cross-entropy (and the perplexity) of each model can be no less than the entropy of the language, so the model with the least cross-entropy is the best. Any of the algorithms could obviously be optimal if the smoothed probabilities it predicted happened to match the actual distribution of the language sample. In practice, the best smoothing algorithm will be the one with the best underlying assumptions about the distribution of probability distributions across all possible languages.
Various smoothing algorithms, and their underlying assumptions, will now be considered in turn.
Each count is incremented by one before calculating probabilities by relative frequency. This deceptively simple algorithm seems rather arbitrary, and it is sometimes proposed that a different constant, for example 0.5 or 0.001 is added to each count. The value of one, however, was shown by Laplace to give the correct Bayesian estimate on the prior assumption that each possible combination of frequency counts is equally likely. This is known as Laplace's Law of Succession.
In the unigram case, with a vocabulary of V word types, and count C(w_{i}) for each word type in a sample of N word tokens, the smoothed probabilities are given by
P_{LA}(w_{i}) = (C(w_{i}) + 1) / (N + V)
Generalising this to n-grams gives
P_{LA}(w_{i}|w_{i-n+1},...,w_{i-1}) = (C(w_{i-n+1},...,w_{i}) + 1) / (C(w_{i-n+1},...,w_{i-1}) + V)

where C( ) = N in the unigram case, in which all conditioning words vanish from the count in the denominator.
In a typical corpus, V will be quite large (~50,000 word types), and will dominate low bigram or unigram counts in the denominator, with drastic effects on the corresponding relative frequencies. Add-one smoothing is not very effective in this case.
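A minimal sketch of add-one smoothing for the unigram case; the vocabulary and counts below are illustrative:

```python
from collections import Counter

def add_one_unigram(corpus, vocab):
    """Laplace (add-one) smoothing: P_LA(w) = (C(w) + 1) / (N + V)."""
    counts = Counter(corpus)
    N, V = len(corpus), len(vocab)
    return {w: (counts[w] + 1) / (N + V) for w in vocab}

vocab = {"a", "b", "c", "d"}
p = add_one_unigram(["a", "a", "b"], vocab)
print(p["a"])  # (2 + 1) / (3 + 4)
print(p["d"])  # an unseen word still gets (0 + 1) / 7
assert abs(sum(p.values()) - 1.0) < 1e-12  # smoothed probabilities are normalised
```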
Addition of a constant less than one is known as Lidstone's Law of Succession:-
P_{LI}(w_{i}|w_{i-n+1},...,w_{i-1}) = (C(w_{i-n+1},...,w_{i}) + δ) / (C(w_{i-n+1},...,w_{i-1}) + Vδ)
This can be revealed as a linear interpolation of the maximum likelihood estimate and the uniform prior 1/V, by substituting
λ = C(w_{i-n+1},...,w_{i-1}) / (C(w_{i-n+1},...,w_{i-1}) + Vδ)
giving
P_{LI}(w_{i}|w_{i-n+1},...,w_{i-1}) = λ C(w_{i-n+1},...,w_{i}) / C(w_{i-n+1},...,w_{i-1}) + (1 - λ)(1/V)

= λ P(w_{i}|w_{i-n+1},...,w_{i-1}) + (1 - λ)(1/V)
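The equivalence can be checked numerically; the counts and δ below are arbitrary illustrative values:

```python
# Lidstone estimate computed directly...
C_ngram, C_context, V, delta = 3, 10, 50000, 0.5
lidstone = (C_ngram + delta) / (C_context + V * delta)

# ...and as a linear interpolation of the ML estimate with the uniform 1/V
lam = C_context / (C_context + V * delta)
interpolated = lam * (C_ngram / C_context) + (1 - lam) / V

assert abs(lidstone - interpolated) < 1e-12  # the two forms agree
```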
In this method, due to Jelinek and Mercer, the probability is estimated from a linear combination of trigram, bigram, and unigram relative frequencies:-

P_{JM}(w_{i}|w_{i-2},w_{i-1}) = λ_{3}P(w_{i}|w_{i-2},w_{i-1}) + λ_{2}P(w_{i}|w_{i-1}) + λ_{1}P(w_{i})

This formulation only avoids zero probabilities if the vocabulary is confined to words in the training set, or to those words plus a catch-all unknown word. To allow for a larger vocabulary, the interpolation should include the uniform distribution as a zero-order model:-

P_{JM}(w_{i}|w_{i-2},w_{i-1}) = λ_{3}P(w_{i}|w_{i-2},w_{i-1}) + λ_{2}P(w_{i}|w_{i-1}) + λ_{1}P(w_{i}) + λ_{0}(1/V)
It is clear that Laplace's and Lidstone's Laws of Succession are just special cases of linear interpolation with the intermediate weights set to zero.
The weights are estimated using held-out data which is different from the data used to calculate the n-gram frequencies. The Baum-Welch algorithm can be used to estimate the values of the weights which maximise the probability of the held-out data. Although linear smoothing performs quite well with a single weight for each order of n-gram, it is usually implemented with multiple weights. In the general case, the algorithm can be expressed recursively:-
P_{JM}(w_{i}|w_{i-n+1},...,w_{i-1}) = λ_{w_{i-n+1},...,w_{i-1}} P(w_{i}|w_{i-n+1},...,w_{i-1}) + (1 - λ_{w_{i-n+1},...,w_{i-1}}) P_{JM}(w_{i}|w_{i-n+2},...,w_{i-1})
It is not really feasible, however, to estimate separate weights for each n-gram, so all the n-grams with the same count C(w_{i-n+1},...,w_{i-1}) are given the same weight. In practice the n-grams are partitioned into a few hundred buckets, each of which contains n-grams with counts in a certain range, and all the n-grams in each bucket are allocated the same weight.
For some obscure reason, this method is often referred to as deleted interpolation. When implementing Jelinek-Mercer smoothing, however, Chen and Goodman used two different methods of training the weights: one using held-out test data, and another involving actual deletion of words from the model. It would seem that only the latter method should be called deleted interpolation.
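A sketch of the interpolated trigram model with fixed weights; the weight values and toy corpus are assumptions (real implementations estimate bucketed weights with Baum-Welch on held-out data, as described above):

```python
from collections import Counter

def jm_trigram(corpus, vocab, l3=0.6, l2=0.3, l1=0.09):
    """Jelinek-Mercer interpolation of trigram, bigram, unigram and
    uniform estimates; l0 = 1 - l3 - l2 - l1 weights the uniform 1/V."""
    tri = Counter(zip(corpus, corpus[1:], corpus[2:]))
    bi = Counter(zip(corpus, corpus[1:]))
    uni = Counter(corpus)
    N, V = len(corpus), len(vocab)
    l0 = 1.0 - l3 - l2 - l1

    def prob(w, u, v):
        p3 = tri[(u, v, w)] / bi[(u, v)] if bi[(u, v)] else 0.0
        p2 = bi[(v, w)] / uni[v] if uni[v] else 0.0
        p1 = uni[w] / N
        return l3 * p3 + l2 * p2 + l1 * p1 + l0 / V

    return prob

corpus = "the cat sat on the mat".split()
prob = jm_trigram(corpus, set(corpus) | {"<unk>"})
assert prob("dog", "the", "cat") > 0  # unseen words get mass from the 1/V term
assert prob("sat", "the", "cat") > prob("dog", "the", "cat")
```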
In a set of n-gram counts, there will usually be several unseen n-grams with a zero count, several singletons with a count of one, and so on. Let N_{c} be the frequency of count value c (c = 0,1,2,...). Smoothing will modify each instance of count c by the same amount to give c^{*}. It is usual to normalise the counts so that the total count remains the same, that is
Σ_{c} cN_{c} = Σ_{c} c^{*}N_{c} = N, with Σ_{c} N_{c} = V
This allows the smoothed n-gram probabilities to be obtained simply by substituting the modified counts for the original counts in the maximum likelihood estimates. For add-one smoothing, normalisation of the smoothed counts gives
c^{*} = (c + 1) N / (N + V)
After normalisation of add-one smoothing, some counts (c < N/V) will increase and some (c > N/V) will decrease. For bigram and trigram counts, where V becomes the square or cube of the vocabulary size, N << V for most training sets, and all counts except c = 0 will decrease. It is natural, therefore, to regard smoothing as a discounting of the non-zero counts for redistribution to the zero counts. The discount coefficient d_{c} is the factor which reduces the original count to the modified count:
c^{*} = cd_{c}
Although any method of smoothing can be regarded as discounting, the term is usually reserved for methods which are conceived directly in terms of discounts. These methods are not used by themselves, but are combined with back-off methods, as will be described below.
Each count is reduced by subtracting a fraction of the count:
d_{c} = 1 - α
The amount redistributed to zero counts by this method is αN. To assign the same probability to n-grams with zero counts as Good-Turing discounting (described below), α can be set to give
d_{c} = 1 - N_{1}/N
Each count is reduced by subtracting a constant b:
d_{c} = (c - b)/c
The amount redistributed to zero counts by this method is b(V - N_{0}).
For each count value c, which occurs with frequency N_{c}, the total of the n-gram counts with this value is cN_{c}. Good-Turing discounting redistributes the total count for singletons to the zero counts, redistributes the total count for doubletons to singletons, and so on for increasing count values. This is done by calculating the smoothed counts according to
c^{*} = (c + 1) N_{c+1} / N_{c}
This redistribution is clearly shown in the table below, where the entire column of values labelled cN_{c} is shifted up by one row in the column labelled c^{*}N_{c}. This leaves the total n-gram count N the same, as shown at the foot of these columns, so the smoothed counts need no further normalisation. The table also reveals an anomaly that occurs at the bottom of the table: the most frequent count (c = 10) is smoothed to zero. The data in this table has been arbitrarily truncated to give a more compact example. There are presumably a few hundred more counts in the actual (unpublished) data before the frequency finally falls to zero. But the problem exemplified in this table will always occur. In practice, as the frequencies become very low there may be more than one count with zero frequency before the maximum count value is reached.
Implementations of Good-Turing discounting avoid the zero frequency problem by only discounting the counts up to a certain threshold (usually c = 5). This is also justifiable on the basis that the larger counts are more reliable. There are two approaches to maintaining normalisation of the smoothed counts when Good-Turing discounting is confined to the range up to and including the threshold value k. Both methods maintain the assignment to the zero counts of a total count equal to the singleton frequency N_{1} (which results from c^{*}N_{c} = N_{1} for c = 0).
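A sketch of thresholded Good-Turing discounting; the N_{c} values are the Church and Gale count frequencies that appear in the table below:

```python
def good_turing(Nc, k=5):
    """Smoothed counts c* = (c + 1) * N_{c+1} / N_c, applied only for
    counts up to the threshold k; larger counts are left alone."""
    return {c: (c + 1) * Nc.get(c + 1, 0) / Nc[c]
            for c in range(k + 1) if c in Nc}

# Count frequencies N_c from the Church and Gale bigram data
Nc = {0: 74671100000, 1: 2018046, 2: 449721, 3: 188933,
      4: 105668, 5: 68379, 6: 48190}
c_star = good_turing(Nc)
print(round(c_star[1], 3))  # ≈ 0.446, as in the table
```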
Jelinek leaves the smoothed counts alone and normalises the total count back to N by discounting the counts above the threshold by α, where
α = Σ_{c>k+1} cN_{c} / Σ_{c>k} cN_{c}, with c^{J} = αc for c > k
This method is shown in columns c^{J} and c^{J}N_{c} of the table, which only differ from columns c^{*} and c^{*}N_{c} below the line representing the threshold.
Katz leaves the unsmoothed counts alone, and normalises the total count back to N by discounting the relative Good-Turing discounts up to and including the threshold by μ, where
μ = N_{1} / (N_{1} - (k + 1)N_{k+1})

and the Good-Turing discount d_{c}^{*} is c^{*}/c.
Note that the discount coefficient μ is applied to the relative discount 1 - d_{c}^{*}, which is a discount in the more usual sense of 10% off, for example, not a discount coefficient as used elsewhere:-
1 - d_{c}^{K} = μ(1- d_{c}^{*}), where d_{c}^{K} is the Katz discount giving c^{K} = cd_{c}^{K}, 0 < c ≤ k.
This method is shown in columns c^{K} and c^{K}N_{c} of the table, which only differ from columns c and cN_{c} above the line representing the threshold. The table also shows the Good-Turing and Katz relative discounts in the rightmost columns.
The effects of the various implementations of Good-Turing discounting are shown in the following table using bigram data computed by Church and Gale from 22 million words collected from the Associated Press newswire. This is the same example used to illustrate Good-Turing discounting by Jelinek and by Jurafsky and Martin.
N_{c} | c | cN_{c} | c^{*} | c^{*}N_{c} | c^{J} | c^{J}N_{c} | c^{K} | c^{K}N_{c} | 1 - d_{c}^{*} | 1 - d_{c}^{K} |
---|---|---|---|---|---|---|---|---|---|---|
74671100000 | 0 | 0 | 0.000027 | 2018046 | 0.000027 | 2018046 | 0.000027 | 2018046 | ||
2018046 | 1 | 2018046 | 0.446 | 899442 | 0.446 | 899442 | 0.35 | 712368 | 55.4% | 64.7% |
449721 | 2 | 899442 | 1.26 | 566799 | 1.26 | 566799 | 1.14 | 511168 | 37.0% | 43.2% |
188933 | 3 | 566799 | 2.24 | 422672 | 2.24 | 422672 | 2.11 | 398568 | 25.4% | 29.7% |
105668 | 4 | 422672 | 3.24 | 341895 | 3.24 | 341895 | 3.11 | 328386 | 19.1% | 22.3% |
68379 | 5 | 341895 | 4.23 | 289140 | 4.23 | 289140 | 4.10 | 280317 | 15.4% | 18.0% |
48190 | 6 | 289140 | 5.19 | 249963 | 4.48 | 216132 | 6 | 289140 | 13.5% | |
35709 | 7 | 249963 | 6.21 | 221680 | 5.23 | 186848 | 7 | 249963 | 11.3% | |
27710 | 8 | 221680 | 7.24 | 200520 | 5.98 | 165706 | 8 | 221680 | 9.5% | |
22280 | 9 | 200520 | 8.25 | 183810 | 6.73 | 149889 | 9 | 200520 | 8.3% | |
18381 | 10 | 183810 | 0 | 0 | 7.48 | 137398 | 10 | 183810 | ||
0 | 11 | 0 | ||||||||
5393967 | 5393967 | α = 0.747 | 5393967 | 5393967 | μ = 1.17 |
The count frequencies have been arbitrarily set to zero after c = 10, causing c^{*} to fall to zero for c = 10.
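As a consistency check, the two normalisation constants can be recomputed from the count frequencies in the table:

```python
# N_c values from the table above; discounting threshold k = 5
Nc = {0: 74671100000, 1: 2018046, 2: 449721, 3: 188933, 4: 105668,
      5: 68379, 6: 48190, 7: 35709, 8: 27710, 9: 22280, 10: 18381}
k = 5

# Jelinek: alpha = sum_{c>k+1} c*N_c / sum_{c>k} c*N_c
alpha = (sum(c * n for c, n in Nc.items() if c > k + 1)
         / sum(c * n for c, n in Nc.items() if c > k))

# Katz: mu = N_1 / (N_1 - (k+1) * N_{k+1})
mu = Nc[1] / (Nc[1] - (k + 1) * Nc[k + 1])

print(alpha, mu)  # ≈ 0.747 and ≈ 1.17, agreeing with the table
```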
Like interpolation models, backoff models combine counts from a hierarchy of distributions, for example trigram counts, bigram counts, and unigram counts. They differ from interpolation models by only using lower order distributions for n-grams with zero counts. A trigram backoff model can naively be represented by
P_{B}(w_{i}|w_{i-2},w_{i-1}) =
    αP(w_{i}|w_{i-2},w_{i-1})   if P(w_{i}|w_{i-2},w_{i-1}) > 0
    βP(w_{i}|w_{i-1})           if P(w_{i}|w_{i-2},w_{i-1}) = 0 and P(w_{i}|w_{i-1}) > 0
    γP(w_{i})                   if P(w_{i}|w_{i-2},w_{i-1}) = 0 and P(w_{i}|w_{i-1}) = 0
As before, P() represents a maximum likelihood estimate of conditional probability obtained from relative frequency. The coefficient α is needed to discount the non-zero trigram model probabilities, which sum to one when w_{i} ranges over all word types. The coefficients β and γ normalise the probabilities obtained from the bigram and unigram models to ensure that the combined probabilities sum to one.
Backoff implementations invariably use discounting, so the coefficient α is not needed, as the probabilities for the non-zero counts have already been discounted in favour of the zero counts, and no longer sum to one. In the canonical back-off method, due to Katz, only the counts below the threshold will have been discounted: above the threshold the maximum likelihood estimates are used directly.
The other coefficients β and γ are not constant, and depend on the context of w_{i}, which is w_{i-2},w_{i-1} for trigrams and w_{i-1} for bigrams. In a trigram model, for example, different coefficients β_{up,and} and β_{down,and} would be needed to reflect the different subsets of the bigram probabilities P(w_{i}|and) to which backoff would occur for these two different contexts.
Incorporating these observations about the coefficients, and using a generalised recursive formulation, the Katz model for backoff with discounting is
P_{KATZ}(w_{i}|w_{i-n+1},...,w_{i-1}) =
    P_{K}(w_{i}|w_{i-n+1},...,w_{i-1})                              if C(w_{i-n+1},...,w_{i}) > 0
    α_{w_{i-n+1},...,w_{i-1}} P_{KATZ}(w_{i}|w_{i-n+2},...,w_{i-1})   if C(w_{i-n+1},...,w_{i}) = 0
P_{K}() is a probability calculated using counts c^{K} discounted by the Katz algorithm described above.
When using this method, all of the trigram, bigram and unigram counts are first smoothed using frequencies obtained from the training set. Conditional probabilities are then calculated using subsets of these smoothed counts.
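A bigram backoff sketch: for simplicity, absolute discounting stands in here for the full Katz/Good-Turing discounts, and the context-dependent backoff weight is computed so that each context's probabilities sum to one (the corpus and discount value are illustrative):

```python
from collections import Counter

def backoff_bigram(corpus, vocab, b=0.75):
    """Backoff with absolute discounting (a simplification of Katz):
    seen bigrams use discounted counts; the reserved mass is spread
    over unseen successors in proportion to a smoothed unigram model."""
    bi = Counter(zip(corpus, corpus[1:]))
    uni = Counter(corpus)
    N, V = len(corpus), len(vocab)

    def p_uni(w):
        return (uni[w] + 1) / (N + V)  # add-one at the lowest order

    def prob(w, v):
        if bi[(v, w)] > 0:
            return (bi[(v, w)] - b) / uni[v]
        seen = sum(1 for u in vocab if bi[(v, u)] > 0)
        reserved = b * seen / uni[v] if uni[v] else 1.0
        denom = sum(p_uni(u) for u in vocab if bi[(v, u)] == 0)
        return reserved * p_uni(w) / denom

    return prob

corpus = "a b a c a b".split()
prob = backoff_bigram(corpus, vocab={"a", "b", "c"})
total = sum(prob(w, "a") for w in {"a", "b", "c"})
assert abs(total - 1.0) < 1e-9  # each context distributes exactly unit mass
```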
This is a form of absolute discounting which can be used either in a backoff model, as originally described by Kneser and Ney, or in an interpolation model, as implemented by Chen and Goodman. The special feature is the conditioning of lower-order models by context. Some unigrams, such as "Francisco", will only occur in a limited number of contexts such as "San Francisco". If these contexts are common in the training set, the unigram count will be correspondingly high, even though the word is unlikely to occur in any other contexts. Kneser-Ney smoothing bases the backoff distribution on the number of contexts in which a word occurs, instead of the number of occurrences of the word. For a bigram model, the formula is:
P_{KN}(w_{i}|w_{i-1}) = (C(w_{i-1},w_{i}) - D) / C(w_{i-1}) + λ(w_{i-1}) |{v | C(v,w_{i}) > 0}| / Σ_{w} |{v | C(v,w) > 0}|
where D is the absolute discount, λ(w_{i-1}) is to normalise the probabilities, and |{v|C(vw_{i}) > 0}| is the number of words v that can be the context of w_{i}.
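An interpolated bigram version can be sketched as follows; the corpus and discount D are illustrative, and the key point is that the lower-order distribution uses context counts rather than raw frequencies:

```python
from collections import Counter

def kneser_ney_bigram(corpus, D=0.75):
    """Interpolated bigram Kneser-Ney: absolute discounting of bigram
    counts, with a continuation-count unigram model for the backoff."""
    bi = Counter(zip(corpus, corpus[1:]))
    uni = Counter(corpus)
    # |{v : C(v, w) > 0}| -- how many distinct left contexts each word has
    contexts = Counter(w for (_, w) in bi)
    total_types = len(bi)  # equals the sum over w of the context counts

    def prob(w, v):
        types_after_v = sum(1 for (x, _) in bi if x == v)
        lam = D * types_after_v / uni[v]  # normalising weight for context v
        p_cont = contexts[w] / total_types
        return max(bi[(v, w)] - D, 0) / uni[v] + lam * p_cont

    return prob

corpus = "san francisco san diego san francisco new york".split()
prob = kneser_ney_bigram(corpus)
# "francisco" is frequent but occurs after only one context word
assert abs(sum(prob(w, "san") for w in set(corpus)) - 1.0) < 1e-9
```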
Some other methods of smoothing n-gram counts are described by Chen and Goodman, including Witten-Bell smoothing, which was developed for text compression, and Church-Gale smoothing, which combines Good-Turing smoothing with bucketing. Kneser and Ney observed that most smoothing algorithms, including all of the interpolation and discounting algorithms mentioned above, can be described by a unified recursive formula
P_{smooth}(w_{i}|w_{i-n+1},...,w_{i-1}) =
    α(w_{i-n+1},...,w_{i})                                            if C(w_{i-n+1},...,w_{i}) > 0
    γ(w_{i-n+1},...,w_{i-1}) P_{smooth}(w_{i}|w_{i-n+2},...,w_{i-1})   if C(w_{i-n+1},...,w_{i}) = 0
The appropriate functions α() and γ() for each algorithm are tabulated by Chen and Goodman. They show that it is quite easy to implement an interpolation version of a backoff algorithm, and vice versa.
The entropy of a language is the average amount of information conveyed by all sequences of words in the language:
H(L) = lim_{n→∞} -(1/n) Σ p(w_{1},...,w_{n}) log_{2} p(w_{1},...,w_{n}) bits per word
where the summation is over all sequences of words of length n.
If the language is stationary and ergodic (which is assumed to be the case), the entropy can be estimated by taking the average logprob of a single sufficiently long sequence of words in the language:
H(p) = lim_{n→∞} -(1/n) Σ_{i=1}^{n} log_{2} p(w_{i}|h_{i}) bits per word
For a given language model, the probabilities m(w_{i}|h_{i}) predicted by the model will differ from the true probabilities, which are unknown. When the entropy of a word sequence is calculated with the model probabilities instead of the true probabilities, it is called the cross-entropy of the model:
H(p,m) = lim_{n→∞} -(1/n) Σ_{i=1}^{n} log_{2} m(w_{i}|h_{i}) bits per word
The cross-entropy is an upper bound on the true entropy, and the model with the lowest cross-entropy is closest to the entropy of the language, and therefore the best model.
In language modelling, the usual metric is perplexity, defined as 2^{H(p,m)}. The perplexity indicates the average number of equiprobable choices for the next word in a sequence. When comparing language models, perplexity is calculated using the same test set, which must be distinct from the training set used to construct the model. For the comparison to be valid, each model should have the same vocabulary.
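The definitions above can be sketched directly; the model interface (a function from word and history to probability) is an assumption for illustration:

```python
import math

def perplexity(model, test_words):
    """Cross-entropy H = -(1/n) sum_i log2 m(w_i | h_i); perplexity = 2^H."""
    n = len(test_words)
    H = -sum(math.log2(model(w, test_words[:i]))
             for i, w in enumerate(test_words)) / n
    return 2 ** H

# Sanity check: a uniform model over V words has perplexity V,
# i.e. V equiprobable choices for every next word.
V = 1000
uniform = lambda w, history: 1 / V
print(perplexity(uniform, ["word"] * 50))  # ≈ 1000
```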
Several toolkits have been developed which can be used to build language models from text files provided by the user. They perform the mechanics of counting n-grams, discounting, calculating weights, and optimising parameters. They support many of the smoothing algorithms described above. To evaluate the models, they calculate the perplexity on a test set. The most widely used toolkits are the CMU-Cambridge Statistical Language Modeling Toolkit and the SRI Language Modeling Toolkit. There is a good description of the CMU-Cambridge toolkit by Clarkson and Rosenfeld.
Chen and Goodman performed an extremely thorough evaluation of language models, including the smoothing algorithms described above and their variants. They carefully implemented each algorithm, resolving and describing many points of detail which were not always clear from the published descriptions of their authors. They evaluated each smoothing algorithm using data from four corpora:
Brown | 1 million words | 53,850 word types | balanced corpus of American English text |
North American Business News | 243 million words | 20,000 word types | newspaper text |
Switchboard | 3 million words | 9,800 word types | telephone conversation transcriptions |
Broadcast News | 130 million words | 50,000 word types | TV and radio news transcriptions |
The best smoothing algorithm was Kneser-Ney smoothing, the authors' variant of which consistently outperformed the other algorithms. Next best were the algorithms of Katz and Jelinek-Mercer. Add-one smoothing and additive smoothing performed poorly, only approaching the baseline version of Jelinek-Mercer for very large training sets of 10 million sentences.
N-gram models perform very well, but many researchers, especially linguists, believe there is room for improvement. Frederick Jelinek was famously quoted as saying
“Anytime a linguist leaves the [IBM speech] group, the recognition rate goes up.”
An n-gram model cannot capture many significant features of speech, such as:-
In a research proposal, for example, researchers at UCL Department of Phonetics and Linguistics disparage N-gram smoothing and continue
“...our greatest criticism is the treatment of words as empty symbols, devoid of linguistic function. Words form classes of meaning and function which also affect their collocational probabilities”
Improvements which have been proposed include:-
Predict the probability of a word based on a mapping of word histories to equivalence classes. Although N-grams are examples of equivalence classes, class models are usually understood to involve other types of equivalence classes such as parts of speech, or states of a language parser.
Build a dynamic model by caching recently observed words to reflect the current topic of the discourse.
Combine models by linear combination of estimates trained on different clusters of texts.
Use knowledge of the current topic or the current speech act.
The basic concepts of language modelling, in the context of speech recognition, are well explained in text books such as Jelinek (chapter 4) and Jurafsky and Martin (chapter 6). There is also a good survey on the web by Roukos in the NSF Survey of the State of the Art in Human Language Technology.
Joshua Goodman's home page has a link to his PowerPoint slides, which give an up-to-date presentation of "The State of the Art of Language Modeling" in April, 2000.
The definitive reference, however, is Chen and Goodman's excellent paper, which contains a comprehensive tutorial on smoothing algorithms, and presents the results of their extensive testing of different algorithms and models. (Be sure to refer to the 1998 version of this paper, which is considerably expanded and updated, compared to the 1996 version.)
F. Jelinek, Statistical Methods for Speech Recognition, MIT Press, Cambridge, USA, 1998.
Daniel Jurafsky and James H. Martin, SPEECH and LANGUAGE PROCESSING: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, Prentice-Hall, New Jersey, 2000.
S. F. Chen, J. T. Goodman, "An Empirical Study of Smoothing Techniques for Language Modeling", Computer Speech and Language, vol. 13, pp. 359-394, October 1999. (PostScript)
J. T. Goodman, "Putting it all Together: Language Model Combination" ICASSP-2000, Istanbul, June 2000. (PostScript)
S.M. Katz, "Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recogniser", IEEE Transactions on Acoustics, Speech and Signal Processing, ASSP-35, vol. 3, pp 400-401, March 1987.
R. Kneser and H. Ney, "Improved Backing-off for m-gram Language Modeling", Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 1, pp. 181-184, 1995.
P.R. Clarkson and R. Rosenfeld, "Statistical Language Modeling Using the CMU-Cambridge Toolkit", Proceedings ESCA Eurospeech, 1997. (PostScript)
E.S. Ristad, "A Natural Law of Succession", Research Report CS-TR-495-95, Princeton University, 1995. (Acrobat PDF)
The CMU-Cambridge Statistical Language Modeling Toolkit
The SRI Language Modeling Toolkit
The following links contain relevant references and resources:
Papers on Language Models for Sparse Data
ISIP Bibliography of Useful Research Papers
N-gram language models and smoothing techniques