Python NLTK | nltk.tokenize.SpaceTokenizer()
With the help of the nltk.tokenize.SpaceTokenizer() method, we can extract tokens from a string of words by splitting on the space characters between them.
Syntax : tokenize.SpaceTokenizer()
Return : Returns the tokens of words.
Example #1:
In this example, we can see that by using the tokenize.SpaceTokenizer() method, we are able to extract tokens from a stream of words wherever a space separates them.
# import SpaceTokenizer() method from nltk
from nltk.tokenize import SpaceTokenizer
# Create a reference variable for Class SpaceTokenizer
tk = SpaceTokenizer()
# Create a string input
gfg = "Geeksfor Geeks.. .$$&* \nis\t for geeks"
# Use tokenize method
geek = tk.tokenize(gfg)
print(geek)
Output :
['Geeksfor', 'Geeks..', '.$$&*', '\nis\t', 'for', 'geeks']
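Notice that the newline and tab characters stay attached to their neighbouring tokens, because SpaceTokenizer splits only on the space character. The NLTK documentation notes that this is the same as calling s.split(' '); a minimal sketch to check that on the input from Example #1:

# import SpaceTokenizer() method from nltk
from nltk.tokenize import SpaceTokenizer

tk = SpaceTokenizer()
gfg = "Geeksfor Geeks.. .$$&* \nis\t for geeks"

# SpaceTokenizer splits only on ' ', so newlines and tabs
# remain inside the resulting tokens
assert tk.tokenize(gfg) == gfg.split(' ')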
Example #2:
# import SpaceTokenizer() method from nltk
from nltk.tokenize import SpaceTokenizer
# Create a reference variable for Class SpaceTokenizer
tk = SpaceTokenizer()
# Create a string input
gfg = "The price\t of burger \nin BurgerKing is Rs.36.\n"
# Use tokenize method
geek = tk.tokenize(gfg)
print(geek)
Output :
['The', 'price\t', 'of', 'burger', '\nin', 'BurgerKing', 'is', 'Rs.36.\n']
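Because the tokenizer only looks at spaces, it can also report where each token sits in the original string. A short sketch, assuming SpaceTokenizer inherits span_tokenize() from NLTK's StringTokenizer, which yields (start, end) character offsets:

# import SpaceTokenizer() method from nltk
from nltk.tokenize import SpaceTokenizer

tk = SpaceTokenizer()
gfg = "The price\t of burger \nin BurgerKing is Rs.36.\n"

# span_tokenize() yields (start, end) offsets; slicing the original
# string with them recovers the corresponding tokens
for start, end in tk.span_tokenize(gfg):
    print((start, end), repr(gfg[start:end]))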