Multi-threaded HTTP server built using only the Python standard library
Server instances can run behind a reverse proxy
- HTTP/1.1 subset
- Serves Static Files: Delivers HTML, CSS, JS, image, JSON, TXT, and other data formats
- Support for IPv4 connections via TCP
- Handles multiple client requests simultaneously with multi-threading (thread-per-request model; a sketch of this model follows the feature list)
- HTTP Methods: Support for `GET`, `POST` (subset), `OPTIONS`, and `HEAD` methods
- HTTP Headers: Support for `Content-Type`, `Content-Length`, `Connection`, `ETag`, `If-None-Match`, `Date`, `Last-Modified`, `Host`, and `User-Agent` headers
- HTTP Responses: Support for `200 OK`, `204 No Content`, `304 Not Modified`, `400 Bad Request`, `403 Forbidden`, `404 Not Found`, `405 Method Not Allowed`, `500 Internal Server Error`, `501 Not Implemented`, and `505 HTTP Version Not Supported` messages
- Correct error handling for path traversal vulnerabilities (`403 Forbidden`) and method permissions for dynamic files (`405 Method Not Allowed`)
- Correctly returns `304 Not Modified` if a strong ETag is provided and validated (a revalidation sketch also follows this list)
- Persistent connections with keep-alive
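
A minimal sketch of the thread-per-request and keep-alive behavior described above, assuming a raw `socket` plus `threading` design (my illustration, not the repository's actual code): each accepted connection gets its own thread, and the handler keeps the socket open until the keep-alive window lapses or the client sends `Connection: close`.

```python
import socket
import threading

KEEPALIVE_TIMEOUT = 5  # seconds; matches the default described below


def handle_client(conn, addr):
    """Serve one connection, reusing it across requests (keep-alive)."""
    conn.settimeout(KEEPALIVE_TIMEOUT)
    try:
        while True:
            try:
                request = conn.recv(65536)  # next request on this persistent connection
            except socket.timeout:
                break  # keep-alive window expired with no new request
            if not request:
                break  # client closed the connection
            body = b"hello\n"
            headers = (
                b"HTTP/1.1 200 OK\r\n"
                b"Content-Type: text/plain\r\n"
                b"Content-Length: " + str(len(body)).encode() + b"\r\n"
                b"Connection: keep-alive\r\n\r\n"
            )
            conn.sendall(headers + body)
            if b"connection: close" in request.lower():
                break  # client asked us not to reuse the connection
    finally:
        conn.close()


def serve(port=8080):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:  # IPv4 over TCP
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", port))
        server.listen()
        while True:
            conn, addr = server.accept()
            # thread-per-request: each connection is handled on its own thread
            threading.Thread(target=handle_client, args=(conn, addr), daemon=True).start()


if __name__ == "__main__":
    serve()
```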
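The strong-ETag revalidation could look roughly like the sketch below. Hashing the file contents is an assumed scheme, and `strong_etag` / `is_not_modified` are hypothetical helper names, not necessarily how the repository computes its validators.

```python
import hashlib
from typing import Optional


def strong_etag(path: str) -> str:
    """Derive a strong ETag from the file's bytes (one possible scheme)."""
    with open(path, "rb") as f:
        return '"%s"' % hashlib.sha256(f.read()).hexdigest()


def is_not_modified(path: str, if_none_match: Optional[str]) -> bool:
    """True when the client's If-None-Match validator matches the current
    ETag, so the server may answer 304 Not Modified with an empty body."""
    if if_none_match is None:
        return False
    # If-None-Match may carry "*" or a comma-separated list of validators
    candidates = [v.strip() for v in if_none_match.split(",")]
    return "*" in candidates or strong_etag(path) in candidates
```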
With Python 3.x installed:
- Clone this repository:

```bash
git clone https://github.com/davidmenggx/simple-http && cd simple-http
```

- Run the server:

```bash
python3 server.py
```

Optional: Specify the port number, keep-alive time (seconds), and verbose logging.
Defaults are port 8080, a 5-second keep-alive, and no logging:
```bash
python3 server.py --port 5678 --keepalive 30 -v
```
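
Flags like these are typically wired up with the standard-library `argparse`; here is a plausible sketch of that interface, assuming the server parses its options this way (the names mirror the flags above, but this is not necessarily the repository's exact code):

```python
import argparse


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Simple multi-threaded HTTP server")
    parser.add_argument("--port", type=int, default=8080,
                        help="TCP port to listen on (default: 8080)")
    parser.add_argument("--keepalive", type=int, default=5,
                        help="keep-alive timeout in seconds (default: 5)")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="enable request logging")
    return parser.parse_args()


if __name__ == "__main__":
    args = parse_args()
    print(f"port={args.port} keepalive={args.keepalive} verbose={args.verbose}")
```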
The base directory is `public/`. Dynamic assets in `api/` allow `POST`, `GET`, `OPTIONS`, and `HEAD`, while all other static assets allow only `GET`, `OPTIONS`, and `HEAD`. One way to implement these rules is sketched after the directory tree below.
```
public/
├── api/
│   ├── comments.json
│   └── public_file.txt
├── data/
│   ├── users.json
│   └── videos.json
├── images/
│   └── img1.jpg
├── index.html
├── script.js
└── styles.css
```
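
A rough sketch of the permission rules above together with the path traversal check from the feature list (`resolve` and `allowed_methods` are hypothetical helper names, not the repository's):

```python
import os

BASE_DIR = os.path.realpath("public")
STATIC_METHODS = {"GET", "OPTIONS", "HEAD"}
DYNAMIC_METHODS = STATIC_METHODS | {"POST"}


def resolve(url_path: str):
    """Map a request path onto public/; return None (-> 403 Forbidden)
    if the resolved path escapes the base directory."""
    candidate = os.path.realpath(os.path.join(BASE_DIR, url_path.lstrip("/")))
    # realpath collapses "../" sequences, so traversal attempts surface here
    if os.path.commonpath([BASE_DIR, candidate]) != BASE_DIR:
        return None
    return candidate


def allowed_methods(fs_path: str):
    """POST is permitted only under api/; for everything else a POST
    would be answered with 405 Method Not Allowed."""
    api_dir = os.path.join(BASE_DIR, "api")
    in_api = os.path.commonpath([api_dir, fs_path]) == api_dir
    return DYNAMIC_METHODS if in_api else STATIC_METHODS
```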
Fetch `index.html` (using the default port 8080):

```bash
curl http://localhost:8080/
```

Visit the front page at http://localhost:8080 (or the port you specified).
Make a POST to `api/public_file.txt`:

```bash
curl -i -X POST -d "Hello from GitHub" http://localhost:8080/api/public_file.txt
```

Now read `api/public_file.txt`:

```bash
curl -i http://localhost:8080/api/public_file.txt
```

Using Locust*, one server instance was able to achieve up to 2500 RPS, 40 ms median latency, and 80 ms p99 latency with 100 concurrent connections using persistent connections**
*Running in a Dockerized environment on my laptop (8-core Intel Core Ultra 7 with 32 GB of RAM on Windows 11)
**For reference, without persistent connections I was only able to achieve 1000 RPS
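
To reproduce a similar load test, a minimal locustfile might look like the sketch below (my example, not the one used for the numbers above; the endpoints come from the usage examples). Locust reuses HTTP connections by default, which is what exercises the keep-alive path.

```python
from locust import HttpUser, task


class SimpleHttpUser(HttpUser):
    host = "http://localhost:8080"

    @task(3)
    def front_page(self):
        # exercises the static-file path
        self.client.get("/")

    @task(1)
    def api_file(self):
        # exercises a dynamic asset under api/
        self.client.get("/api/public_file.txt")
```

Run it headless with, e.g., `locust -f locustfile.py --headless -u 100 -r 10 --run-time 1m`.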
