corpora
n. (Corpora) a surname; (Italian) Corpora
n. plural of corpus: the main body of anything; a complete collection
corpus luteum — [histology] corpus luteum; lutein
learner corpora — corpora of language produced by learners
corpora pedunculata — pedunculate (mushroom) bodies
bilingual corpora — corpora in two languages
corpus cavernosum — [anatomy] corpus cavernosum (erectile tissue of the penis)
raw corpora — unprocessed, unannotated corpora
corpora processing — corpus processing
corpus linguistics — [computing] corpus linguistics
habeas corpus — the right of habeas corpus; habeas corpus act
corpus callosum — [anatomy] corpus callosum
corpora versicolorata — amyloid bodies
Corpus Christi — the feast of Corpus Christi
corpora amylacea — [pathology] corpora amylacea (starch-like bodies)
It can generate a filter automatically from corpora of categorized messages rather than requiring human effort in rule development.
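The idea in the sentence above — learning a filter from corpora of categorized messages instead of hand-writing rules — can be sketched as a tiny naive Bayes classifier. The messages, smoothing scheme, and function names below are illustrative assumptions, not taken from any particular filter:

```python
import math
from collections import Counter

# Tiny illustrative corpora of categorized messages (made-up data).
spam = ["win cash now", "free cash offer"]
ham = ["meeting at noon", "lunch at noon tomorrow"]

def train(messages):
    """Count word frequencies across a list of messages."""
    counts = Counter()
    for msg in messages:
        counts.update(msg.split())
    return counts

spam_counts, ham_counts = train(spam), train(ham)
vocab = set(spam_counts) | set(ham_counts)

def score(message, counts, class_docs, all_docs):
    """Log-probability of a message under one class, add-one smoothed."""
    total = sum(counts.values())
    logp = math.log(class_docs / all_docs)  # class prior
    for word in message.split():
        logp += math.log((counts[word] + 1) / (total + len(vocab)))
    return logp

def classify(message):
    n = len(spam) + len(ham)
    s = score(message, spam_counts, len(spam), n)
    h = score(message, ham_counts, len(ham), n)
    return "spam" if s > h else "ham"
```

Real filters differ from this sketch mainly in corpus size and feature engineering, not in the basic counting scheme.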
This contains the linguistic corpora that are analyzed and processed in the book.
This paper focuses on the use of frequency lists and collocations provided by corpora and the corresponding concordancers in English language teaching.
Large corpora (masses of text) are a good place to start.
hakia is a general-purpose semantic search engine, as opposed to, e.g., Powerset and Cognition (below), which search structured corpora (text) like Wikipedia.
Some corpora include a wide range of language while others are used to focus on a particular linguistic feature.
With a little practice and appropriate textual searches, you can quickly develop an ability to validate source corpora for balanced *alloc() and free(), or new and delete.
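A crude version of the textual search that sentence describes — counting *alloc() calls against free() calls — can be done with regular expressions. The C snippet being scanned is a made-up example, and a count mismatch is only a hint to investigate, not proof of a leak:

```python
import re

# Hypothetical C source snippet to audit.
src = """
    char *buf = malloc(64);
    int *xs = calloc(4, sizeof *xs);
    free(buf);
"""

allocs = re.findall(r"\b\w*alloc\s*\(", src)   # malloc, calloc, realloc, ...
frees  = re.findall(r"\bfree\s*\(", src)

print(len(allocs), len(frees))  # 2 1 -> one allocation has no matching free
```

A real audit would need to track individual pointers across control flow; raw counts only flag files worth a closer look.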
The relation between threshold, weight, and matching degree is also discussed. To make querying friendlier, a method to avoid returning the empty set or the entire corpus is also presented.
This 3- to 5.75-inch structure is made of two sandwiched strips of corpora cavernosa, a tissue that engorges with blood and stiffens when its owner is aroused.
One fairly simple thing you are likely to do with linguistic corpora is analyze the frequencies of various events within them, and make probability predictions based on these known frequencies.
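The frequency analysis described above needs nothing more than a counter; the miniature corpus here is an assumption for illustration (NLTK's FreqDist wraps the same idea):

```python
from collections import Counter

# Hypothetical miniature corpus; a real one would be far larger.
text = "the cat sat on the mat the cat slept"
tokens = text.split()

freq = Counter(tokens)                          # raw event frequencies
total = sum(freq.values())
prob = {w: n / total for w, n in freq.items()}  # relative-frequency estimates

print(freq.most_common(2))    # [('the', 3), ('cat', 2)]
print(round(prob["the"], 3))  # 0.333
```

The relative frequency of a word is then its maximum-likelihood probability estimate for new text drawn from the same source.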
Modeling the linguistic data found in corpora can help us to understand linguistic patterns, and can be used to make predictions about new language data.
While NLTK comes with a number of corpora that have been pre-processed (often manually) to various degrees, conceptually each layer relies on the processing in the adjacent lower layer.
The new materials and technology, namely corpora (with concordancing) and multimedia, are used in the experiment.
In addition, since distributed blacklists require talking to a server to perform verification, Pyzor performed far more slowly against my test corpora than did any other technique.
Twenty years ago, one half of all citizens in Corpora met the standards for adequate physical fitness as then defined by the National Advisory Board on Physical Fitness.
Tokenization matters a lot for random text collections; in fairness to NLTK, its bundled corpora have been packaged for easy and accurate tokenization with wstokenizer().
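wstokenizer() is an old NLTK name (current releases spell it WhitespaceTokenizer); the difference tokenization makes can be shown without the library at all. The sample sentence and regex below are illustrative assumptions, not NLTK's implementation:

```python
import re

raw = "Dr. Smith's corpora, pre-processed (often manually), vary."

# Naive whitespace tokenization: punctuation stays glued to words.
ws_tokens = raw.split()

# A slightly smarter pattern: words (allowing internal apostrophes or
# hyphens) or single punctuation marks.
pattern = re.compile(r"\w+(?:['\-]\w+)*|[^\w\s]")
re_tokens = pattern.findall(raw)

print(ws_tokens[:3])  # ['Dr.', "Smith's", 'corpora,']
print(re_tokens[:4])  # ['Dr', '.', "Smith's", 'corpora']
```

With whitespace splitting, "corpora," and "corpora" would count as different events in a frequency analysis, which is exactly why tokenization choice matters for random text.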
Some text corpora are categorized, e.g., by genre or topic; sometimes the categories of a corpus overlap each other.
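Overlapping categories, as described above, amount to a many-to-many mapping between categories and documents. The category names and document ids below are made-up placeholders (NLTK exposes the real thing through methods like categories() and fileids() on its categorized corpus readers):

```python
# Hypothetical categorized corpus: document ids grouped by topic.
# Note that doc "d2" belongs to two categories, i.e. categories overlap.
categories = {
    "news":    {"d1", "d2"},
    "finance": {"d2", "d3"},
    "sports":  {"d4"},
}

def docs_in(*cats):
    """Union of documents across the given categories."""
    out = set()
    for c in cats:
        out |= categories[c]
    return out

overlap = categories["news"] & categories["finance"]
print(sorted(overlap))                      # ['d2']
print(sorted(docs_in("news", "finance")))   # ['d1', 'd2', 'd3']
```

Because the mapping is many-to-many, summing document counts per category can exceed the corpus size; queries should deduplicate with set union, as docs_in() does.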
But since overall fitness levels are highest in the regions of Corpora where levels of computer ownership are also highest, it is clear that using computers has not made citizens less physically fit.
As a basis for this study, the present methods are also discussed and summarized from the angle of corpora in this dissertation.