CHINESE_VGRAM: a lexer for extracting tokens from Chinese text.
Similarly, the lexer has to know about the structure of continuations; the parser only knows that it sometimes gets additional text to add on to a header.
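The continuation structure the lexer has to know about can be sketched as follows, assuming an RFC 822-style convention (an assumption, since the source does not name the format) in which a line beginning with whitespace continues the previous header:

```python
# Sketch of RFC 822-style header unfolding (the whitespace-prefix
# convention is an assumption): a line beginning with whitespace
# continues the previous header rather than starting a new one.
def unfold_headers(lines):
    headers = []
    for line in lines:
        if line[:1] in (" ", "\t") and headers:
            # Continuation: add the extra text on to the current header.
            headers[-1] += " " + line.strip()
        else:
            headers.append(line.rstrip())
    return headers
```

Here the lexer-side function knows how continuations look; a parser using it would only see already-joined header lines.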
A listing of supported lexer types follows.
These directives are used to define the tokens the lexer can return.
The lexer actually does some of the work of figuring out where in a message it is, but the parser still ties everything together.
For lexing purposes, this means that the lexer definition of an integer number, for example, can be used to build the lexer definition of a real number and a fraction.
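That composition can be sketched with regular-expression fragments; the pattern names here are illustrative, not from any particular lexer generator:

```python
import re

# Hypothetical lexer fragments: the integer pattern is reused to build
# the real-number and fraction patterns, mirroring the composition
# described above.
INTEGER  = r"\d+"
REAL     = INTEGER + r"\." + INTEGER     # e.g. 3.14
FRACTION = INTEGER + r"/" + INTEGER      # e.g. 22/7
```

Changing the definition of `INTEGER` then automatically changes what counts as a real number or a fraction.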
A lexer is a software component that divides text strings into individual words, or tokens, so that the individual words can be indexed.
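A minimal sketch of such a component, assuming simple word characters delimit tokens (real lexers use language-specific rules):

```python
import re

# Minimal lexer sketch: divide a text string into individual word
# tokens so that each one can be indexed.
def tokenize(text):
    return re.findall(r"\w+", text)
```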
BASIC_LEXER: The lexer for English and most Western European languages that use whitespace-delimited words.
These numbers will be outside the range of possible valid characters; thus, the lexer can return individual characters as themselves, or special tokens using these defined values.
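A sketch of that convention, with illustrative token names and values (the specific numbers are assumptions, chosen above the 0-255 character range so they can never collide with a character code):

```python
# Single characters are returned as their own codes (0-255), while
# multi-character tokens get numbers above that range.
TOK_IDENT  = 256
TOK_NUMBER = 257

def token_code(token):
    if len(token) == 1:
        return ord(token)     # a character stands for itself
    if token.isdigit():
        return TOK_NUMBER
    return TOK_IDENT
```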
A good lexer example can help a lot with learning how to write a tokenizer.
JAPANESE_VGRAM: a lexer for extracting tokens from Japanese text.
Thus, P::RD on top of Perl 5 is a powerful parser and lexer combination.
KOREAN_LEXER: a lexer for extracting tokens from Korean text.
It uses start states, a feature allowing the lexer to match some rules only some of the time.
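The idea behind start states can be sketched as a small hand-written lexer; the state names and comment syntax here are illustrative, assuming `/* ... */` delimiters appear as standalone words:

```python
# Start-state sketch: the comment-skipping rule applies only while the
# lexer is in the COMMENT state, i.e. "only some of the time".
def lex(text):
    state, tokens = "INITIAL", []
    for word in text.split():
        if state == "INITIAL":
            if word == "/*":
                state = "COMMENT"      # enter the comment state
            else:
                tokens.append(word)    # normal rule applies only here
        elif state == "COMMENT":
            if word == "*/":
                state = "INITIAL"      # leave the comment state
            # anything else inside a comment is discarded
    return tokens
```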
Why does the lexer rule for strings take precedence over all my other rules?
The lexer is the part of our language knowledge that says "this is a sentence; this is punctuation; twenty-three is a single word."
JAPANESE_LEXER: a lexer for extracting tokens from Japanese text.
This does mean that the lexer and the parser both have to know that an empty line separates a header from a body.
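The shared convention can be sketched as a one-line split on the first empty line; this is a minimal illustration, not the source's actual implementation:

```python
# Header/body split sketch: the first empty line divides a message's
# headers from its body.
def split_message(text):
    header, _, body = text.partition("\n\n")
    return header, body
```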
KOREAN_MORPH_LEXER: a lexer for extracting tokens from Korean text.
How to use an Alex monadic lexer with Happy?