Python NLTK | tokenize.WordPunctTokenizer()
With the help of the nltk.tokenize.WordPunctTokenizer() method, we can extract tokens from a string of words or sentences by splitting it into runs of alphabetic and non-alphabetic characters.
Syntax : tokenize.WordPunctTokenizer()
Return : Returns the tokens from a string as alphabetic and non-alphabetic runs.
Example #1:
In this example, we can see that by using the tokenize.WordPunctTokenizer() method, we are able to extract tokens from a stream of alphabetic and non-alphabetic characters.
# import WordPunctTokenizer() method from nltk
from nltk.tokenize import WordPunctTokenizer
# Create a reference variable for Class WordPunctTokenizer
tk = WordPunctTokenizer()
# Create a string input
gfg = "GeeksforGeeks...$$&* \nis\t for geeks"
# Use tokenize method
geek = tk.tokenize(gfg)
print(geek)
Output :
['GeeksforGeeks', '...$$&*', 'is', 'for', 'geeks']
Example #2:
# import WordPunctTokenizer() method from nltk
from nltk.tokenize import WordPunctTokenizer
# Create a reference variable for Class WordPunctTokenizer
tk = WordPunctTokenizer()
# Create a string input
gfg = "The price\t of burger \nin BurgerKing is Rs.36.\n"
# Use tokenize method
geek = tk.tokenize(gfg)
print(geek)
Output :
['The', 'price', 'of', 'burger', 'in', 'BurgerKing', 'is', 'Rs', '.', '36', '.']
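Under the hood, WordPunctTokenizer is a regexp-based tokenizer whose pattern matches either a run of word characters or a run of non-word, non-space characters. A minimal sketch of the same splitting behavior using only Python's re module (the pattern \w+|[^\w\s]+ mirrors NLTK's documented one):

```python
import re

# Pattern used by WordPunctTokenizer: runs of word characters (\w+),
# or runs of characters that are neither word nor whitespace ([^\w\s]+),
# so punctuation clusters like "...$$&*" come out as a single token.
WORD_PUNCT = re.compile(r"\w+|[^\w\s]+")

def word_punct_tokenize(text):
    """Split text into alternating alphabetic and punctuation runs."""
    return WORD_PUNCT.findall(text)

print(word_punct_tokenize("GeeksforGeeks...$$&* \nis\t for geeks"))
# ['GeeksforGeeks', '...$$&*', 'is', 'for', 'geeks']
```

This reproduces both outputs above: whitespace (including tabs and newlines) never appears in a token, and "Rs.36." splits into 'Rs', '.', '36', '.' because the digits and the dots fall into different character classes.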