Vector space models are a well-defined mathematical framework that has been widely used in text analytics. In these models, text units are represented by high-dimensional vectors in order to support tasks that require a minimal level of text understanding. The constructed vector spaces are endowed with a norm, and a distance measure is employed to compute the similarity of vectors and, thus, of the text units they represent. The high dimensionality of the vectors, however, limits the performance of these models. We introduce Random Manhattan Indexing (RMI), a method for constructing L1 normed vector space models of semantics at reduced dimensionality. RMI is a two-step incremental method of vector space construction that employs a sparse stable random projection to achieve its objective.
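The core idea behind stable random projections for the L1 norm can be illustrated with a minimal sketch. The code below is an assumption-laden simplification, not the RMI method itself: it uses a dense projection whose entries are drawn from the standard Cauchy (1-stable) distribution, whereas RMI employs a sparse variant that approximates this behavior. Under a Cauchy projection, each component of the projected difference vector is Cauchy-distributed with scale equal to the L1 distance between the original vectors, so the median of the absolute projected components estimates that distance. All names (`cauchy_projection`, `l1_estimate`) are illustrative.

```python
import numpy as np

def cauchy_projection(d, k, rng):
    """Dense d-by-k projection matrix with standard Cauchy entries.

    Each entry is drawn from the 1-stable (Cauchy) distribution, so a
    projected coordinate of a vector v is Cauchy with scale ||v||_1.
    """
    return rng.standard_cauchy((d, k))

def l1_estimate(x, y, R):
    """Estimate the L1 distance ||x - y||_1 from projected coordinates.

    Each coordinate of (x - y) @ R is Cauchy with scale ||x - y||_1;
    the median of the absolute values of a standard Cauchy sample is 1,
    so the sample median of |z| recovers the scale, i.e. the distance.
    """
    z = (x - y) @ R
    return np.median(np.abs(z))

rng = np.random.default_rng(42)
d, k = 500, 5000          # original and reduced dimensionality
x = rng.random(d)
y = rng.random(d)
R = cauchy_projection(d, k, rng)

true_l1 = np.sum(np.abs(x - y))
approx_l1 = l1_estimate(x, y, R)
```

With k on the order of a few thousand, the median estimator typically lands within a few percent of the true L1 distance; a sparse projection, as used by RMI, trades a little accuracy for much cheaper incremental construction.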