LiveMirror/jieba: "Jieba" Chinese word segmentation — built to be the best Python Chinese word segmentation component - GitHub


# jieba

"结巴" 中文分词：做最好的 Python 中文分词组件
"Jieba" (Chinese for "to stutter") Chinese text segmentation: built to be the best Python Chinese word segmentation module.

Scroll down for English documentation.

## Features

- Supports three segmentation modes:
  - Accurate mode, which tries to cut the sentence into the most precise segmentation; suitable for text analysis.
  - Full mode, which scans out every fragment of the sentence that can form a word; very fast, but it cannot resolve ambiguity.
  - Search engine mode, which, on top of accurate mode, further splits long words to improve recall; suitable for segmentation in search engines.
- Supports traditional Chinese (繁体) segmentation
- Supports custom dictionaries

## Online demo

http://jiebademo.ap01.aws.af.cm/ (Powered by Appfog)

## Installation under Python 2.x

- Fully automatic: `easy_install jieba` or `pip install jieba`
- Semi-automatic: download http://pypi.python.org/pypi/jieba/, extract it, and run `python setup.py install`
- Manual: place the `jieba` directory in the current directory or in the `site-packages` directory
- Import it with `import jieba`

## Installation under Python 3.x

The master branch currently supports Python 2.x only. A Python 3.x branch is also essentially usable: https://github.com/fxsjy/jieba/tree/jieba3k

    git clone https://github.com/fxsjy/jieba.git
    git checkout jieba3k
    python setup.py install

## Algorithm

- A Trie tree is used for efficient word-graph scanning, building a directed acyclic graph (DAG) of every way the Chinese characters in the sentence can form words.
- Dynamic programming is used to find the maximum-probability path through the DAG, i.e. the segmentation with the highest combined word frequency. (A simplified sketch of this step follows the segmentation example below.)
- For words not in the dictionary, an HMM model based on the word-forming capability of Chinese characters is used, decoded with the Viterbi algorithm.

## Function 1): Segmentation

- The `jieba.cut` method accepts two input parameters: 1) the string to be segmented; 2) `cut_all`, which controls whether full mode is used.
- The `jieba.cut_for_search` method accepts one parameter: the string to be segmented. It produces a finer-grained segmentation suited to building inverted indexes for search engines.
- Note: the string to be segmented may be a GBK string, a UTF-8 string, or unicode.
- Both `jieba.cut` and `jieba.cut_for_search` return an iterable generator; use a `for` loop to obtain each word (as unicode), or convert with `list(jieba.cut(...))`.

Code example (segmentation):

    #encoding=utf-8
    import jieba

    seg_list = jieba.cut("我来到北京清华大学", cut_all=True)
    print "Full Mode:", "/ ".join(seg_list)  # full mode

    seg_list = jieba.cut("我来到北京清华大学", cut_all=False)
    print "Default Mode:", "/ ".join(seg_list)  # accurate mode

    seg_list = jieba.cut("他来到了网易杭研大厦")  # accurate mode is the default
    print ", ".join(seg_list)

    seg_list = jieba.cut_for_search("小明硕士毕业于中国科学院计算所，后在日本京都大学深造")  # search engine mode
    print ", ".join(seg_list)

Output:

    [Full Mode]: 我/ 来到/ 北京/ 清华/ 清华大学/ 华大/ 大学
    [Accurate Mode]: 我/ 来到/ 北京/ 清华大学
    [New Word Recognition]: 他, 来到, 了, 网易, 杭研, 大厦  ("杭研" is not in the dictionary, yet it is recognized by the Viterbi algorithm.)
    [Search Engine Mode]: 小明, 硕士, 毕业, 于, 中国, 科学, 学院, 科学院, 中国科学院, 计算, 计算所, 后, 在, 日本, 京都, 大学, 日本京都大学, 深造
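
The dynamic-programming step referenced in the Algorithm section above can be illustrated with a small, self-contained sketch. This is not jieba's implementation: the miniature `word_freq` table, the `best_path` function, and the floor frequency for unknown single characters are all made up for the illustration; only the general idea (score every dictionary word in the DAG by its log frequency and keep the best path) mirrors what the section describes.

```python
# -*- coding: utf-8 -*-
# Toy sketch of segmenting by maximum-probability path over a word DAG.
# Every dictionary word found in the sentence is an edge; the best path is the
# segmentation whose words have the highest total log-frequency.
import math

# Hypothetical miniature dictionary: word -> frequency count.
word_freq = {
    u"我": 5000, u"来到": 800, u"北京": 3000,
    u"清华": 900, u"大学": 2500, u"清华大学": 1200, u"华大": 50,
}
total = float(sum(word_freq.values()))

def best_path(sentence):
    n = len(sentence)
    # best[i] = (score, j): the best segmentation of sentence[i:] starts with sentence[i:j].
    best = [(0.0, n)] * (n + 1)
    for i in range(n - 1, -1, -1):
        candidates = []
        for j in range(i + 1, n + 1):
            word = sentence[i:j]
            # Unknown single characters get a floor frequency of 1 so the DAG stays connected.
            freq = word_freq.get(word, 1 if j == i + 1 else 0)
            if freq:
                candidates.append((math.log(freq) - math.log(total) + best[j][0], j))
        best[i] = max(candidates)
    i, words = 0, []
    while i < n:  # walk the back-pointers to recover the segmentation
        j = best[i][1]
        words.append(sentence[i:j])
        i = j
    return words

print("/ ".join(best_path(u"我来到北京清华大学")))  # 我/ 来到/ 北京/ 清华大学
```

jieba itself builds the DAG from a Trie over the full dictionary and falls back to the HMM mentioned above for spans of unknown characters; this sketch only covers the frequency-based path selection.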

## Function 2): Adding a custom dictionary

Developers can supply their own custom dictionary so that words missing from the jieba dictionary are included. Although jieba can recognize new words on its own, adding them yourself guarantees a higher accuracy.

- Usage: `jieba.load_userdict(file_name)`  # file_name is the path of the custom dictionary
- The dictionary format is the same as that of `dict.txt`: one word per line; each line has three parts separated by spaces: the word, its frequency, and its part of speech (the last may be omitted).
- Sample dictionary: https://github.com/fxsjy/jieba/blob/master/test/userdict.txt
- Usage example: https://github.com/fxsjy/jieba/blob/master/test/test_userdict.py
- A short end-to-end sketch of this workflow appears after Function 5 below.

Before: 李小福 / 是 / 创新 / 办 / 主任 / 也 / 是 / 云 / 计算 / 方面 / 的 / 专家 /

After loading the custom dictionary: 李小福 / 是 / 创新办 / 主任 / 也 / 是 / 云计算 / 方面 / 的 / 专家 /

"Improving ambiguity correction with a user-defined dictionary" --- fxsjy/jieba#14

## Function 3): Keyword extraction

- `jieba.analyse.extract_tags(sentence, topK)`  # requires `import jieba.analyse` first
- `sentence` is the text from which keywords are extracted
- `topK` is the number of keywords with the largest TF-IDF weights to return; the default is 20
- A minimal usage sketch also appears after Function 5 below.

Code example (keyword extraction): https://github.com/fxsjy/jieba/blob/master/test/extract_tags.py

## Function 4): Part-of-speech tagging

Tags each word of the segmented sentence with its part of speech, using a tag set compatible with ictclas.

Usage example:

    >>> import jieba.posseg as pseg
    >>> words = pseg.cut("我爱北京天安门")
    >>> for w in words:
    ...     print w.word, w.flag
    ...
    我 r
    爱 v
    北京 ns
    天安门 ns

## Function 5): Parallel segmentation

- Principle: the target text is split by line, the lines are distributed to multiple Python processes for segmentation, and the results are merged, giving a considerable speedup.
- Based on Python's built-in `multiprocessing` module; Windows is not supported yet.
- Usage:
  - `jieba.enable_parallel(4)`  # enable parallel mode; the argument is the number of processes
  - `jieba.disable_parallel()`  # disable parallel mode
- Example: https://github.com/fxsjy/jieba/blob/master/test/parallel/test_file.py
- Experimental result: on a 4-core 3.4 GHz Linux machine, accurate-mode segmentation of the complete works of Jin Yong ran at 1 MB/s, 3.3 times the speed of the single-process version.
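
Below is the end-to-end sketch referenced from Function 2. It is not taken from the repository: the `userdict.txt` file is written on the fly only for the demo (its entries are the ones used in the before/after example above); `jieba.load_userdict` and `jieba.cut` are the documented jieba calls.

```python
# -*- coding: utf-8 -*-
# Sketch of Function 2: write a tiny user dictionary, load it, then segment.
import io
import jieba

# One entry per line: word, frequency, optional part-of-speech tag (omitted here).
with io.open("userdict.txt", "w", encoding="utf-8") as f:
    f.write(u"云计算 5\n")
    f.write(u"李小福 2\n")
    f.write(u"创新办 3\n")

jieba.load_userdict("userdict.txt")

sentence = u"李小福是创新办主任也是云计算方面的专家"
print("/ ".join(jieba.cut(sentence)))
# Expected: 李小福/ 是/ 创新办/ 主任/ 也/ 是/ 云计算/ 方面/ 的/ 专家
```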

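And this is the minimal usage sketch referenced from Function 3. `jieba.analyse.extract_tags(sentence, topK)` is the call described in that section; the sample text is reused from the segmentation example and `topK=5` is an arbitrary choice.

```python
# -*- coding: utf-8 -*-
# Sketch of Function 3: extract the keywords with the highest TF-IDF weights.
import jieba.analyse

text = u"小明硕士毕业于中国科学院计算所，后在日本京都大学深造"
tags = jieba.analyse.extract_tags(text, 5)  # top 5 keywords; the default topK is 20
print(", ".join(tags))
```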
## Other dictionaries

- A dictionary file with a smaller memory footprint: https://github.com/fxsjy/jieba/raw/master/extra_dict/dict.txt.small
- A dictionary file with better support for traditional Chinese segmentation: https://github.com/fxsjy/jieba/raw/master/extra_dict/dict.txt.big

Download the dictionary you need, then either overwrite `jieba/dict.txt` with it or load it with `jieba.set_dictionary('data/dict.txt.big')`.

## Change to module initialization: lazy load (since version 0.28)

jieba loads lazily: `import jieba` does not immediately trigger loading of the dictionary; the dictionary is loaded and the Trie is built only once it is actually needed.

If you want to initialize jieba up front, you can also do so manually:

    import jieba
    jieba.initialize()  # manual initialization (optional)

Before version 0.28 the path of the main dictionary could not be specified; with the lazy-load mechanism you can now change it:

    jieba.set_dictionary('data/dict.txt.big')

Example: https://github.com/fxsjy/jieba/blob/master/test/test_change_dictpath.py

## Segmentation speed

- 1.5 MB/second in full mode
- 400 KB/second in default mode
- Test environment: Intel(R) Core(TM) i7-2600 CPU @ 3.4 GHz; 《围城》.txt

## FAQ

1. How was the model data generated? fxsjy/jieba#7
2. What is the license of this library? fxsjy/jieba#2

For more questions, see: https://github.com/fxsjy/jieba/issues?sort=updated&state=closed

## ChangeLog

https://github.com/fxsjy/jieba/blob/master/Changelog

# jieba

"Jieba" (Chinese for "to stutter") Chinese text segmentation: built to be the best Python Chinese word segmentation module.

## Features

- Supports three types of segmentation mode:
  - Accurate mode, which attempts to cut the sentence into the most accurate segmentation; suitable for text analysis.
  - Full mode, which scans out all words that could possibly form a word; very fast, but it cannot resolve ambiguity.
  - Search engine mode, which, based on accurate mode, attempts to cut long words into several short words to improve recall.

## Usage

- Fully automatic installation: `easy_install jieba` or `pip install jieba`
- Semi-automatic installation: download http://pypi.python.org/pypi/jieba/, extract it, then run `python setup.py install`
- Manual installation: place the `jieba` directory in the current directory or in the Python `site-packages` directory.
- Use `import jieba` to import; the Trie tree is built only on the first import (this takes a few seconds).

## Algorithm

- Based on a Trie tree structure for efficient word-graph scanning; all possible words formed by the Chinese characters of the sentence constitute a directed acyclic graph (DAG).
- Dynamic programming is used to find the maximum-probability path, i.e. the best segmentation based on combined word frequencies.
- For unknown words, an HMM model over character positions is used, decoded with the Viterbi algorithm.

## Function 1): cut

- The `jieba.cut` method accepts two input parameters: 1) the string that requires segmentation, and 2) `cut_all`, a parameter used to control the segmentation mode.
- `jieba.cut` returns an iterable generator; you can use a `for` loop to get each segmented word (in unicode), or `list(jieba.cut(...))` to create a list.
- `jieba.cut_for_search` accepts only one parameter, the string that requires segmentation, and cuts the sentence into finer-grained short words.

Code example: segmentation

    #encoding=utf-8
    import jieba

    seg_list = jieba.cut("我来到北京清华大学", cut_all=True)
    print "Full Mode:", "/ ".join(seg_list)  # full mode

    seg_list = jieba.cut("我来到北京清华大学", cut_all=False)
    print "Default Mode:", "/ ".join(seg_list)  # default mode

    seg_list = jieba.cut("他来到了网易杭研大厦")
    print ", ".join(seg_list)

    seg_list = jieba.cut_for_search("小明硕士毕业于中国科学院计算所，后在日本京都大学深造")  # search engine mode
    print ", ".join(seg_list)

Output:

    [Full Mode]: 我/ 来到/ 北京/ 清华/ 清华大学/ 华大/ 大学
    [Accurate Mode]: 我/ 来到/ 北京/ 清华大学
    [Unknown Words Recognition]: 他, 来到, 了, 网易, 杭研, 大厦  (In this case, "杭研" is not in the dictionary, but it is identified by the Viterbi algorithm.)
    [Search Engine Mode]: 小明, 硕士, 毕业, 于, 中国, 科学, 学院, 科学院, 中国科学院, 计算, 计算所, 后, 在, 日本, 京都, 大学, 日本京都大学, 深造

## Function 2): Add a custom dictionary

Developers can specify their own custom dictionary to include words missing from the jieba thesaurus. jieba has the ability to identify new words, but adding your own new words ensures a higher rate of correct segmentation.
- Usage: `jieba.load_userdict(file_name)`  # file_name is the path of the custom dictionary
- The dictionary format is the same as that of `analyse/idf.txt`: one word per line; each line is divided into two parts, the word itself and the word frequency, separated by a space.

Example:

    云计算 5
    李小福 2
    创新办 3

Before: 李小福 / 是 / 创新 / 办 / 主任 / 也 / 是 / 云 / 计算 / 方面 / 的 / 专家 /

After loading the custom dictionary: 李小福 / 是 / 创新办 / 主任 / 也 / 是 / 云计算 / 方面 / 的 / 专家 /

## Function 3): Keyword Extraction

- `jieba.analyse.extract_tags(sentence, topK)`  # needs to first `import jieba.analyse`
- `sentence`: the text from which keywords are extracted
- `topK`: the number of keywords with the largest TF-IDF weights to return; the default value is 20

Code sample (keyword extraction): https://github.com/fxsjy/jieba/blob/master/test/extract_tags.py

## Using Other Dictionaries

It is possible to supply jieba with your own custom dictionary, and there are also two dictionaries readily available for download:

- A smaller dictionary for a smaller memory footprint: https://github.com/fxsjy/jieba/raw/master/extra_dict/dict.txt.small
- A bigger dictionary with better support for traditional characters (繁體): https://github.com/fxsjy/jieba/raw/master/extra_dict/dict.txt.big

By default, an in-between dictionary is used, called `dict.txt` and included in the distribution. In either case, download the file you want first, and then call `jieba.set_dictionary('data/dict.txt.big')` or just replace the existing `dict.txt`.

## Initialization

By default, jieba employs lazy loading and builds the Trie only once it is necessary. This takes 1-3 seconds, once, after which it is not initialized again. If you want to initialize jieba manually, you can call:

    import jieba
    jieba.initialize()  # (optional)

You can also specify the dictionary (not supported before version 0.28):

    jieba.set_dictionary('data/dict.txt.big')

## Segmentation speed

- 1.5 MB/second in full mode
- 400 KB/second in default mode
- Test environment: Intel(R) Core(TM) i7-2600 CPU @ 3.4 GHz; 《围城》.txt

## Online demo

http://jiebademo.ap01.aws.af.cm/ (Powered by Appfog)


