
Is it possible to send HTTP requests from inside a running Docker container?

I have a basic distributed system that calculates the average cyclomatic complexity across all commits of a GitHub repo. It consists of a Master node running Flask that hands out SHA IDs to a set of Worker nodes whenever they send a GET request to the Master.

I was trying to get some practice with Docker, and wanted to put my Worker code in a Docker container, then put that container on a set of remote machines and have the Master run on my local machine. I wanted the Workers and Master to be able to communicate with one another.

I'm not clear on how to go about doing this. I have tried telling the Worker nodes to send their requests to my local machine's public IP and the port the Flask server is running on (5000). I have mapped port 5000 of the remote machine to port 5000 of the Docker container. That is about the extent of my knowledge here. I'll attach the relevant code snippets below, rather than copy-pasting the entire program.

Master setup:

if __name__ == '__main__':
    get_commits()
    TOTAL_COMMITS = len(JOB_QUEUE)
    app.run(host='0.0.0.0', port=5000, debug=False)
    print('\n-----Shutting Down Server-----')
    calc_avg_cc()
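
The route handlers are omitted above; roughly, they might look like the following sketch (the route paths match the Worker URLs below, but the JSON response shape is illustrative, not the exact code):

from flask import Flask, jsonify

app = Flask(__name__)
JOB_QUEUE = []  # filled by get_commits() with one SHA ID per commit

@app.route('/init', methods=['GET'])
def init_node():
    # A Worker announces itself before it starts requesting jobs.
    return jsonify({'total_commits': len(JOB_QUEUE)})

@app.route('/', methods=['GET'])
def get_job():
    # Hand the next commit SHA to whichever Worker asks for one.
    if JOB_QUEUE:
        return jsonify({'sha': JOB_QUEUE.pop()})
    return jsonify({'sha': None})  # queue drained; Workers can stop polling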

Worker URLS:

master_url = 'http://<my public ip>:5000/'
node_setup_url = 'http://<my public ip>:5000/init'
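
For illustration, a minimal sketch of the Worker's polling loop, assuming the Master replies with the JSON shape sketched above (the real worker.py does more per commit):

import requests

master_url = 'http://<my public ip>:5000/'
node_setup_url = 'http://<my public ip>:5000/init'

def process_commit(sha):
    ...  # placeholder: check out the commit and compute its complexity

def run_worker():
    requests.get(node_setup_url)  # announce this Worker to the Master
    while True:
        sha = requests.get(master_url).json().get('sha')
        if sha is None:
            break  # no jobs left; Master will shut down and aggregate
        process_commit(sha)

if __name__ == '__main__':
    run_worker()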

----- Edit -----

As @odk and @jonasheinisch suggested, it was in fact a network error. I was deploying my Docker container on a remote machine that was on my university network. I spun up two AWS instances, one for the Master and one for the Worker, and hey presto, it worked.

I've updated the setup and will provide a bit more context, as requested. One thing I'm still unsure of is exactly how the requests from my Worker make it to my Master. I have mapped port 80 on the host to port 80 of the Docker container, as shown in my Dockerfile below. Choosing port 80 was a bit of a guess, since I know it is associated with HTTP traffic.

Dockerfile:

# Use an official Python runtime as a parent image
FROM python:3.5-slim

# Set the working directory to /worker
WORKDIR /worker

# Copy the worker code, requirements, and GitHub token into the container at /worker
ADD worker.py requirements.txt github-token ./

# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Run worker.py when the container launches
CMD ["python", "worker.py"]

New Worker setup:

master_url = 'http://<public IP of my AWS Master Instance>:5000/'
node_setup_url = 'http://<public IP of my AWS Master Instance>:5000/init'

Master is still listening on all interfaces on port 5000. How exactly do my requests (sent with the Python requests library) make it out of the Docker container and onto the Internet?


Answer

About the main problem: as you confirmed, it was a network connectivity issue. Case closed.

About your edit: since the Workers are the ones that initiate the connection (at least from your description), you don't need to expose any port on those containers. Docker sets up NAT using iptables for container egress traffic (you can verify this on a Linux host by running iptables -L as root). It's the same as in your home network: you probably have a small router that does NAT for your home, and you don't need to open any ports there to access sites on the internet. You only need to expose ports when something in your container actually listens on them and should receive ingress traffic. Hope this is clearer now.
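
To see this in practice: from inside the Worker container, with no ports exposed or published at all, an outbound request still goes through. A minimal check, using the same requests library as the Worker (the URL placeholder is from the question):

import requests

# No EXPOSE or -p mapping is needed for this request: Docker's NAT
# (iptables MASQUERADE rules) rewrites the container's outbound traffic
# so it leaves through the host's network interface automatically.
resp = requests.get('http://<public IP of my AWS Master Instance>:5000/',
                    timeout=5)
print(resp.status_code)  # any response at all proves egress works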
