
Tag: nltk

Resource reuters not found

I’m using a Windows system with Python 3.7. Installing nltk causes no problem and importing it works (I already installed nltk from cmd), but when I run the code I get an error that I don’t know how to fix. However, the same code works fine on my MacBook, so I’m wondering what’s going on with the Windows system.
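This error usually means the `reuters` corpus data has not been downloaded on that machine: the `nltk` package and its corpora are installed separately. A minimal sketch of the usual fix (assuming the standard `nltk.download` mechanism; the check-before-download guard is just a convenience):

```python
import nltk

def ensure_reuters():
    """Return True once the Reuters corpus is available, downloading it on first use."""
    try:
        # Raises LookupError if the corpus is not on any path in nltk.data.path
        nltk.data.find("corpora/reuters.zip")
        return True
    except LookupError:
        # Fetches the corpus into the default nltk_data directory
        return bool(nltk.download("reuters"))
```

On Windows the data typically lands in `%APPDATA%\nltk_data`, which is a different location than on macOS; that is why the same code can work on one machine and not the other.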

Counting specific words in a sentence

I am currently trying to solve this homework question. My task is to implement a function that returns a vector of word counts for a given text. I am required to split the text into sentences and then use NLTK’s tokeniser to tokenise each sentence. This is the code I have so far: There are two doctests that should give the
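A minimal sketch of such a counting function, assuming NLTK’s `TreebankWordTokenizer` (chosen here because, unlike `word_tokenize`, it needs no downloaded `punkt` data) and a `Counter` as the “vector” of counts:

```python
from collections import Counter
from nltk.tokenize import TreebankWordTokenizer

def word_counts(text):
    """Return a Counter mapping each token to its frequency in text."""
    # Lowercase first so "The" and "the" are counted as the same word
    tokens = TreebankWordTokenizer().tokenize(text.lower())
    return Counter(tokens)

counts = word_counts("The cat sat on the mat and the cat slept")
```

A `Counter` supports direct lookups such as `counts["cat"]`, which makes it easy to query the count of any specific word.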

ValueError: Could not find a default download directory of nltk

I have a problem importing nltk. I configured Apache and ran some sample Python code, and it worked well in the browser at /localhost/cgi-bin/test.py. However, when I import nltk in test.py it stops running: execution does not continue past the “import nltk” line, and it gives me the error ValueError: Could not find a default download directory. But when
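This error typically appears when NLTK cannot work out a home directory for the user running the script, which is common for the Apache/CGI user. A minimal sketch of a workaround is to point NLTK at an explicit data directory (the path below is a hypothetical example; it must be readable by the server user):

```python
import os
import nltk

# Hypothetical directory readable by the web-server user
NLTK_DATA_DIR = "/var/www/nltk_data"

# NLTK_DATA is consulted by nltk at import time, so set it for child
# processes too; since nltk is already imported here, also append the
# directory to the search path directly.
os.environ["NLTK_DATA"] = NLTK_DATA_DIR
if NLTK_DATA_DIR not in nltk.data.path:
    nltk.data.path.append(NLTK_DATA_DIR)
```

Setting the `NLTK_DATA` environment variable in the Apache configuration (e.g. with `SetEnv`) achieves the same effect without editing the script.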

nltk: how to get inflections of words

I have a list of nearly 5000 English words, and for each word I need these inflectional forms: noun: singular and plural; verb: infinitive, present simple, present simple 3rd person, past simple, present participle (ing form), past participle; adjective: comparative and superlative; adverb. How can I extract this information for a given word (e.g. help) in nltk via Python?
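NLTK itself lemmatises (maps inflected forms back to a base form) but does not ship a generator of inflections; third-party libraries such as `pattern` or `LemmInflect` are commonly used for that. For regular words only, the orthographic rules can be sketched in plain Python (a naive illustration, not a complete inflector — irregular words like "go"/"went" or consonant doubling as in "stop"/"stopped" need a lexicon or a proper library):

```python
def inflect(word):
    """Naive inflections for a REGULAR English word; irregulars are not handled."""
    # Plural noun / 3rd-person-singular verb share the same spelling rules
    if word.endswith(("s", "x", "z", "ch", "sh")):
        s_form = word + "es"
    elif word.endswith("y") and word[-2] not in "aeiou":
        s_form = word[:-1] + "ies"
    else:
        s_form = word + "s"

    # Past simple / past participle (identical for regular verbs)
    if word.endswith("e"):
        past = word + "d"
    elif word.endswith("y") and word[-2] not in "aeiou":
        past = word[:-1] + "ied"
    else:
        past = word + "ed"

    # Present participle (-ing form)
    ing = (word[:-1] if word.endswith("e") else word) + "ing"

    return {"plural": s_form, "3sg": s_form, "past": past, "ing": ing}
```

For example, `inflect("help")` yields "helps", "helped" and "helping"; comparative/superlative adjectives would need analogous rules, and at 5000 words a library with an exception lexicon is the safer choice.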

how to use word_tokenize in data frame

I have recently started using the nltk module for text analysis and am stuck at one point. I want to use word_tokenize on a dataframe, so as to obtain all the words used in a particular row of the dataframe. Basically, I want to separate all the words and find the length of each text in the dataframe. I know
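A minimal sketch of applying an NLTK tokenizer column-wise with `DataFrame.apply` (using `TreebankWordTokenizer` here so no `punkt` download is required; the column names are made-up examples):

```python
import pandas as pd
from nltk.tokenize import TreebankWordTokenizer

df = pd.DataFrame({"text": ["NLTK makes tokenizing easy", "Short text"]})

tok = TreebankWordTokenizer()
# One list of tokens per row, then the token count per row
df["tokens"] = df["text"].apply(tok.tokenize)
df["n_tokens"] = df["tokens"].apply(len)
```

The same pattern works with `nltk.word_tokenize` in place of `tok.tokenize` once the `punkt` resource has been downloaded.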
