I have built this application as two microservices: a Flask server for the backend and a Streamlit server for the frontend. The Flask server handles data processing and sentiment analysis, while the Streamlit server provides a user-friendly interface for interacting with the application.
- Install Dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Start the Server:

  ```bash
  streamlit run index.py
  ```
1. **User Input**: The user provides a Twitter username through the Streamlit interface.
2. **Flask Endpoint**: The username is sent to the Flask server endpoint `/get_sentimentanalysis_overall`.
3. **Scraper Module**: The Flask server calls `scraper.main(username)` to scrape the latest tweets from the provided username.
4. **Data Processing**: The scraped tweets are processed using utility functions in `utils.py`.
5. **Sentiment Analysis**: The processed tweets are analyzed for sentiment using a pre-trained model.
6. **Response**: The average sentiment score and corresponding sentiment label (Positive, Neutral, Negative) are returned to the Streamlit interface.
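The steps above can be sketched as a single Flask route. The real project delegates to `scraper.main()` and the helpers in `utils.py`; here those are replaced by tiny stand-in functions so the sketch is self-contained and runnable, and the JSON payload shape is an assumption:

```python
# Self-contained sketch of the overall-sentiment endpoint. The stand-in
# functions below mimic scraper.main(), utils.sentimentanalyze() and
# utils.valuetofeeling(); the real implementations differ.
from flask import Flask, jsonify, request

app = Flask(__name__)

def scrape_tweets(username):
    """Stand-in for scraper.main(username): return recent tweet texts."""
    return [f"hello from {username}", "great day!"]

def analyze(tweet):
    """Stand-in for utils.sentimentanalyze(tweet): score in [-1, 1]."""
    return 0.5 if "great" in tweet else 0.0

def value_to_feeling(score):
    """Stand-in for utils.valuetofeeling(i): map a score to a label."""
    if score > 0.05:
        return "Positive"
    if score < -0.05:
        return "Negative"
    return "Neutral"

@app.route("/get_sentimentanalysis_overall", methods=["POST"])
def get_sentimentanalysis_overall():
    username = request.json["username"]      # assumed payload: {"username": "..."}
    tweets = scrape_tweets(username)         # step 3: scrape latest tweets
    scores = [analyze(t) for t in tweets]    # steps 4-5: clean and score
    avg = sum(scores) / len(scores)
    return jsonify({"score": avg, "label": value_to_feeling(avg)})
```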
Endpoints:

- `GET /`: Returns the homepage.
- `POST /get_sentimentanalysis_overall`: Accepts a Twitter username and returns the average sentiment score and label.
- `server.py`: Main entry point for the Flask server.
- `utils.py`: Contains utility functions for data cleaning and sentiment analysis.
- `scraper.py`: Handles scraping tweets from Twitter.
- `valuetofeeling(i)`: Converts a sentiment score to a textual label.
- `cleanREGEX(raw)`: Cleans raw text using regex to remove HTML tags and special characters.
- `deEmojify(x)`: Removes emojis from the text.
- `remove_punct(text)`: Removes punctuation and numbers from the text.
- `lower_case(df)`: Converts text to lowercase.
- `cleaner(df)`: Applies the cleaning functions to the dataframe in sequence.
- `sentimentanalyze(tweet)`: Analyzes the sentiment of a tweet.
- `load_saved_artifacts()`: Loads the pre-trained model and vectorizer from disk.
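A few of these helpers sketched out for orientation only; the actual implementations in `utils.py` may use different regexes and thresholds than the ones assumed here:

```python
# Illustrative versions of some utils.py helpers. Regexes and the +/-0.05
# sentiment thresholds are assumptions, not the project's exact values.
import re
import string

def clean_regex(raw):
    """Like cleanREGEX(raw): strip HTML tags, then special characters."""
    no_tags = re.sub(r"<[^>]+>", " ", raw)
    return re.sub(r"[^A-Za-z0-9\s]", " ", no_tags)

def de_emojify(x):
    """Like deEmojify(x): drop non-ASCII characters, which removes emojis."""
    return x.encode("ascii", "ignore").decode()

def remove_punct(text):
    """Like remove_punct(text): strip punctuation and digits."""
    return text.translate(str.maketrans("", "", string.punctuation + string.digits))

def value_to_feeling(i):
    """Like valuetofeeling(i): map an average score to a label."""
    if i > 0.05:
        return "Positive"
    if i < -0.05:
        return "Negative"
    return "Neutral"
```

These are applied in sequence by `cleaner(df)` before the cleaned text is passed to the pre-trained model.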
