There are three formats for the
n-grams. The examples below are for 3-grams. For 2-grams, 4-grams, or
5-grams, each entry would have fewer (2-grams) or more (4-grams and
5-grams) words.
Words. Does not include punctuation or numbers, and
does not include the part of speech of the words.
Words + PoS. "Word" is anything: word, number, or
punctuation. Includes the part of speech of the words (the first letter of the
PoS tag).
Database. This is the most "complicated" version,
but perhaps also the most powerful. Each word is represented as an integer
value, and the meaning of these integer values is found in the "lexicon" file
(which gives the case-sensitive word form, the lemma, and the part of
speech).
The leftmost column in all of the n-grams tables is
the frequency of the n-gram. The other columns are the integer values for the
words (two columns for 2-grams, three for 3-grams, etc.).
Each number corresponds to an entry in the [lexicon] table. For example, the
three entries for [ most of the ] in the lexicon table are:
| wordID | word (case sensitive) | lemma | part of speech |
This means that the first entry in the 3-grams table above is for the string
[ most of the ], which occurs 59,319 times in the corpus.
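As a sketch of how those integer values resolve back to words, the lookup is a join of each n-gram column against the lexicon. All table and column names here (lexicon, trigrams, wordID, w1–w3, freq), as well as the sample lexicon rows and IDs, are assumptions for illustration; the actual CSV headers may differ:

```python
import sqlite3

# In-memory database with a hypothetical schema; real column names may differ.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE lexicon (wordID INTEGER PRIMARY KEY, "
            "word TEXT, lemma TEXT, pos TEXT)")
cur.execute("CREATE TABLE trigrams (freq INTEGER, w1 INTEGER, "
            "w2 INTEGER, w3 INTEGER)")

# Illustrative rows only; the frequency mirrors the example above,
# but the wordIDs are made up.
cur.executemany("INSERT INTO lexicon VALUES (?, ?, ?, ?)",
                [(101, "most", "most", "d"),
                 (102, "of",   "of",   "i"),
                 (103, "the",  "the",  "a")])
cur.execute("INSERT INTO trigrams VALUES (59319, 101, 102, 103)")

# Resolve each integer column against the lexicon to recover the string.
cur.execute("""
    SELECT t.freq, l1.word, l2.word, l3.word
    FROM trigrams t
    JOIN lexicon l1 ON t.w1 = l1.wordID
    JOIN lexicon l2 ON t.w2 = l2.wordID
    JOIN lexicon l3 ON t.w3 = l3.wordID
""")
print(cur.fetchone())  # (59319, 'most', 'of', 'the')
```

The same three-way join pattern extends to 4-grams and 5-grams by adding one join per extra word column.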
Note that you would be responsible for creating the SQL statements to group by lemma, word, PoS,
etc., and to limit and sort the data. We assume a good knowledge of SQL, as well
as the ability to create the databases and tables from the CSV files.
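For instance, grouping by lemma to sum frequencies across word forms might look like the following sketch. The schema, table and column names, and sample data are all hypothetical, not taken from the actual files:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE lexicon (wordID INTEGER PRIMARY KEY, "
            "word TEXT, lemma TEXT, pos TEXT)")
cur.execute("CREATE TABLE trigrams (freq INTEGER, w1 INTEGER, "
            "w2 INTEGER, w3 INTEGER)")

# Two word forms ("walk", "walked") sharing the lemma "walk".
cur.executemany("INSERT INTO lexicon VALUES (?, ?, ?, ?)",
                [(1, "walk",   "walk",   "v"),
                 (2, "walked", "walk",   "v"),
                 (3, "to",     "to",     "i"),
                 (4, "school", "school", "n")])
cur.executemany("INSERT INTO trigrams VALUES (?, ?, ?, ?)",
                [(40, 1, 3, 4),   # "walk to school"
                 (25, 2, 3, 4)])  # "walked to school"

# Collapse the first slot to its lemma, summing frequencies across forms.
cur.execute("""
    SELECT l1.lemma, l2.word, l3.word, SUM(t.freq) AS total
    FROM trigrams t
    JOIN lexicon l1 ON t.w1 = l1.wordID
    JOIN lexicon l2 ON t.w2 = l2.wordID
    JOIN lexicon l3 ON t.w3 = l3.wordID
    GROUP BY l1.lemma, l2.word, l3.word
    ORDER BY total DESC
""")
print(cur.fetchall())  # [('walk', 'to', 'school', 65)]
```

In practice you would load the CSV files into such tables first (e.g. with the `csv` module and `executemany`, or your database's bulk-import tool), then apply GROUP BY, ORDER BY, and LIMIT as needed.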