Python – Tokenization

In Python, tokenization is essentially breaking a larger body of text into smaller units such as lines (sentences), words, or individual tokens, and it works for non-English languages as well. The various tokenization functions built into the nltk module can be used in programs like the ones below.
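If you have not used NLTK's tokenizers before, the Punkt models they rely on may need to be downloaded once. A minimal setup sketch, assuming nltk is already installed:

import nltk

# One-time download of the Punkt sentence tokenizer models used by sent_tokenize
nltk.download('punkt')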

Line Tokenization

In the following example, we use the sent_tokenize function to break a given text into separate sentences (lines).

import nltk
sentence_data = "The First sentence is about Python. The Second: about Django. You can learn Python, Django and Data Analysis here."
nltk_tokens = nltk.sent_tokenize(sentence_data)
print(nltk_tokens)

When we run the above program, we get the following output:

['The First sentence is about Python.', 'The Second: about Django.', 'You can learn Python, Django and Data Analysis here.']

Non-English Token Segmentation

In the following example, we tokenize German text.

import nltk

german_tokenizer = nltk.data.load('tokenizers/punkt/german.pickle')
german_tokens = german_tokenizer.tokenize('Wie geht es Ihnen? Gut, danke.')
print(german_tokens)

When we run the above program, we get the following output:

['Wie geht es Ihnen?', 'Gut, danke.']
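Loading the language-specific Punkt pickle explicitly is one option; sent_tokenize also accepts a language argument, so the same result can be obtained more concisely. A brief sketch, assuming the German Punkt data has been downloaded:

import nltk

# sent_tokenize selects the language-specific Punkt model via the language argument
german_tokens = nltk.sent_tokenize('Wie geht es Ihnen? Gut, danke.', language='german')
print(german_tokens)

This should print the same two sentences as above.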

Word Tokenization

We use the word_tokenize function from nltk to split the text into words.

import nltk

word_data = "It originated from the idea that there are readers who prefer learning new skills from the comforts of their drawing rooms"
nltk_tokens = nltk.word_tokenize(word_data)
print(nltk_tokens)

When we run the above program, we get the following output:

['It', 'originated', 'from', 'the', 'idea', 'that', 'there', 'are', 'readers',
'who', 'prefer', 'learning', 'new', 'skills', 'from', 'the',
'comforts', 'of', 'their', 'drawing', 'rooms']
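Note that word_tokenize also separates punctuation into its own tokens. A small sketch illustrating this (the example sentence is our own):

import nltk

# Punctuation marks such as commas, exclamation marks, and periods become separate tokens
print(nltk.word_tokenize("Hello, world! You can learn Python here."))

This prints something like ['Hello', ',', 'world', '!', 'You', 'can', 'learn', 'Python', 'here', '.'].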
