Docker UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

tried this

    export DOCKER_CLIENT_TIMEOUT=120
    export COMPOSE_HTTP_TIMEOUT=120

and it seems to fix the issue for now
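To avoid exporting these in every shell, a small sketch: docker-compose reads COMPOSE_HTTP_TIMEOUT from a .env file next to docker-compose.yml (DOCKER_CLIENT_TIMEOUT is included below on the assumption it is picked up the same way):

    # Sketch: persist the higher timeouts per project in the .env file
    # that docker-compose reads from the project directory.
    printf 'COMPOSE_HTTP_TIMEOUT=120\nDOCKER_CLIENT_TIMEOUT=120\n' >> .env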

Other solutions people mentioned in this thread:

  • Restart Docker
  • Increase Docker CPU & memory

I'm seeing exactly the same problem with the latest beta for Mac. Same error if I run docker-compose create.

Could this be related to having one very big layer in the image? (a very lengthy npm install operation that takes about a minute to be flattened into a layer when docker builds the image)

We are also seeing this issue using a docker compose file with six containers [docker-compose version 1.8.1, build 878cff1] on both Windows and Mac [Version 1.12.2-rc1-beta27 (build: 12496) 179c18cae7]

Increasing the resources available to docker seems to reduce the chance of it happening (as does extending the timeout vars), but it's never eliminated.

We also have some large-ish layers (240MB is the largest, the main package install command) and we are bind-mounting a host directory with 120MB of files across a couple of containers.

From different attempts at working around this, I found something that might shed some light on a possible fix:

At first my scenario looked a bit like this:

    app:
      build: .
      volumes:
        - ${PWD}:/usr/src
        - /usr/src/node_modules

My mounted path included many directories with big, static files that I didn't really need mounted in terms of code reloading. So I ended up swapping it for something like this:

    app:
      build: .
      volumes:
        - ${PWD}:/usr/src
        - /usr/src/static  # large files in a long dir structure
        - /usr/src/node_modules

This left all my big static files out of the runtime mount, which made the service start way faster.

What I understand from this is: the more files you mount, and especially the larger they are (images in the MBs instead of source files in the Bs/KBs), the more loading times go up.
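If you want to see which parts of a bind mount are heavy before splitting it up, a quick sketch (paths are placeholders for your own layout):

    # List the largest directories under the mounted path, biggest first
    du -sh ./* | sort -rh | head -n 10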

Hope this helps

+1
I am seeing this timeout every single week, usually after an idle weekend. When I try to connect to my containers, it times out...
I have to kill the running docker process and restart it to work around it....

+1
It happens to me every time I try to restart the containers because they are not responding anymore after a day. I'm not sure if my case has to do with the mounting since I am only trying to stop the containers.

Happening with an nginx container, Up 47 hours.
Docker for Mac Version 17.03.1-ce-mac12 (17661) Channel: stable d1db12684b.

    version: '2.1'
    services:
      nginx:
        hostname: web
        extends:
          file: docker/docker-compose.yml
          service: nginx
        ports:
          - 80:80
          - 443:443
        volumes:
          - ./src:/var/www:ro
      php:
        build:
          dockerfile: "./docker/web/php/Dockerfile"
          context: "."
        volumes:
          - ./src:/var/www

    $ docker-compose kill nginx
    Killing project_nginx_1 ...
    ERROR: for project_nginx_1  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
    ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
    If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

Thanks @gvilarino, I believe mounting large files is the cause of this issue on my linux server. Your snippet could be a workaround if the big files are not needed in the container.

Still, I wonder why mounting is slow in Docker? Maybe it triggers a disk copy? But why?

@cherrot I wouldn't say I'm extremely proficient in the subject, but I believe this has to do with the storage driver used by Docker and how it works internally for keeping layers in order. Use docker info to see what storage driver your daemon is using (probably aufs, which is the slowest) and depending on your host OS, you may change it to something else (overlay being a better choice, if supported). There are faster alternatives like LCFS but they aren't commercially supported by Docker, so you'd be on your own there.
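A quick way to check this (a sketch; note that switching storage drivers hides existing images/containers until you switch back, so only do it if you can re-pull):

    # Show the storage driver the daemon is currently using
    docker info --format '{{.Driver}}'

    # To try overlay2 instead (assuming your kernel and backing filesystem
    # support it), add
    #   { "storage-driver": "overlay2" }
    # to /etc/docker/daemon.json, then restart the daemon:
    sudo systemctl restart docker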

We are also seeing this time-out. It also seems due to the volumes we are using.

We need some containers to access some SMB network shares. So we mounted those shares on the host system, and bind-mounted them inside the container. But sometimes the communication between the Windows Server and our Linux host stalls (see https://access.redhat.com/solutions/1360683) and this blocks the starting or stopping of our container, which only times out after a while.

I do not have a fix yet. I'm looking for a volume plugin which supports SMB, or a way to make the stalled SMB communication problem go away, but no real solution yet.
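One avenue that might be worth a try (a sketch only; the server name, share and credentials below are placeholders): let Docker's local volume driver mount the CIFS share itself instead of bind-mounting a host mount point that can stall:

    docker volume create \
      --driver local \
      --opt type=cifs \
      --opt device=//fileserver.example.com/share \
      --opt o=addr=fileserver.example.com,username=svc_user,password=secret,vers=3.0 \
      smb_share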

FWIW: For the people landing here through a search engine looking for a resolution, I've been able to fix this simply by the _did you try turning it off and on again?_ method; I restarted my Docker Mac OS client.

+1 on that, I am running stress testing on my instance which runs 4 containers and docker hangs even for docker ps -a, so I'm trying to restart the containers but I am getting
UnixHTTPConnectionPool(host='localhost', port=None): Read timed out and

    Traceback (most recent call last):
      File "/usr/bin/docker-compose", line 9, in <module>
        load_entry_point('docker-compose==1.8.0', 'console_scripts', 'docker-compose')()
      File "/usr/lib/python2.7/dist-packages/compose/cli/main.py", line 61, in main
        command()
      File "/usr/lib/python2.7/dist-packages/compose/cli/main.py", line 113, in perform_command
        handler(command, command_options)
      File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__
        self.gen.throw(type, value, traceback)
      File "/usr/lib/python2.7/dist-packages/compose/cli/errors.py", line 56, in handle_connection_errors
        log_timeout_error()
    TypeError: log_timeout_error() takes exactly 1 argument (0 given)

But if I restart the docker service it seems to be resolved, any ideas?

+1

`Restarting web-jenkins_jenkins_1 ...

ERROR: for web-jenkins_jenkins_1 UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=130)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 120).`

I restart docker, it's solved. But every day I need to restart.

restarting Docker works for me.

+1 restarting docker worked for me as well.

@rodrigo-brito - I've been getting this error for a little while now and restarting the docker daemon has been solving the issue - no more since I added another service to my project.

I have the same problem, but I have a fairly simple setup.
I've only one verdaccio 3 container based on an image 164 MB in size.
This is very disappointing :/

I'm using a MacBook Pro 13 from 2015.

Happened to me because of a large port range, it actually creates 1 rule per port....

A simple sudo service docker restart solves this for me consistently every time it occurs.

Just happened to me as well, in my case docker-compose push (not even trying to run the app) on Azure DevOps.

My other builds do not use docker-compose but apparently docker push.

I removed the kubuntu 18.04.1 docker.io version of docker and installed docker-ce 18.09.0
Problem went away.

I just converted the docker-compose push step into individual pushes instead.

We're seeing this timeout when running a container via docker-compose or via the docker-py library (it times out even after we bump the timeout to 2 minutes); however, we don't see the error when we run via the Docker CLI (the container starts instantly). We also only see the issue on a Linux CI server and not on our Macs. We're working on building out a minimal reproducible example.

Having this issue with a docker-compose kill on a Debian VM on a macOS host, installed directly from docker. (Docker version 18.09.0, build 4d60db4)

I had the same error when starting docker with log-driver: syslog when the rsyslog port was unavailable.
Error starting container 0ba2fb9540ec6680001f90dce56ae3a04b831c8146357efaab79d4756253ec8b: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

restarting Docker works for me.

@rodrigo-brito restarting is not a solution...

Happened to me because of a large port range, it actually creates 1 rule per port....

Exact same thing for me. After the error, the docker daemon goes on to consume memory until depletion. I need to systemctl stop docker before my system dies. (Docker version 18.09.3, build 774a1f4)

    ports:
      - "10000-20000:10000-20000"
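For ranges this large, one workaround (a sketch, not something confirmed in this thread; it only applies on Linux hosts and cannot be combined with ports:) is host networking for that one service, so no per-port NAT/proxy rules get created:

    services:
      scanner:                  # hypothetical service name
        image: example/scanner  # placeholder image
        network_mode: host      # no per-published-port rules are created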

simple restart of docker solved this for me...

It seems the issue is still present in recent docker-ce versions. I'm starting ~5 containers, with the slow one having a docker volume mount that's pointing to an NFS share. No containers expose any port, did somebody figure out if this is a valid error (port=None seems suspicious)?

~~~
Client:
 Version:           18.09.5
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        e8ff056dbc
 Built:             Thu Apr 11 04:44:28 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.5
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       e8ff056
  Built:            Thu Apr 11 04:10:53 2019
  OS/Arch:          linux/amd64
  Experimental:     false
~~~

Added some more output from --verbose. I don't think there's anything of use here, it just says for a long time that some container create operation is waiting. Apparently it's using polling, as the following message is printed about 1x/sec:

~~~
compose.parallel.feed_queue: Pending: set()
~~~

The localhost / port=None is a bit of a red herring I think, as the connection is done over docker.sock, so it's not some nil error hidden away somewhere. I think this will need to be tracked down inside docker, not that docker-compose's handling of this request here is optimal.

Docker-compose seems to be missing some sort of request id which could be logged, so we would know which request is stalling. For example, I know that my api container wasn't able to be created within the timeout, but the request log isn't helping at all. Maybe somebody else can add some info here:

~~~
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/create?name=api-memcache HTTP/1.1" 201 90
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/disconnect HTTP/1.1" 200 0
compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': '22b774d0451c7aea118ba928a9a87177be09e63286f1d4eeaf009ddfe3c4c44f',
'Warnings': None}
compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('22b774d0451c7aea118ba928a9a87177be09e63286f1d4eeaf009ddfe3c4c44f')
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('ba67095c5ea718af13a09798bc2f5ab24f5d0b54ce684b6f4cb248ab705df900', 'proxy', aliases=['redis', 'ba67095c5ea7'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None)
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/containers/22b774d0451c7aea118ba928a9a87177be09e63286f1d4eeaf009ddfe3c4c44f/json HTTP/1.1" 200 None
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/create?name=api HTTP/1.1" 201 90
compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': '1b67251d494199cfd4ba9855f20d41b6b0be8544757c2d5d416a90d044f4d0ec',
'Warnings': None}
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('1b67251d494199cfd4ba9855f20d41b6b0be8544757c2d5d416a90d044f4d0ec')
compose.parallel.feed_queue: Pending: set()
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/disconnect HTTP/1.1" 200 0
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/connect HTTP/1.1" 200 0
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> JSON...
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/containers/1b67251d494199cfd4ba9855f20d41b6b0be8544757c2d5d416a90d044f4d0ec/json HTTP/1.1" 200 None
compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None
compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('22b774d0451c7aea118ba928a9a87177be09e63286f1d4eeaf009ddfe3c4c44f', 'proxy')
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('7d81ef23610f1b8f7ac95837cbf6c9eef977b5b0846fea24be5c7054e471df39', 'proxy', aliases=['comments', '7d81ef23610f'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None)
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> JSON...
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/create?name=api-comments-db HTTP/1.1" 201 90
compose.cli.verbose_proxy.proxy_callable: docker start <- ('ba67095c5ea718af13a09798bc2f5ab24f5d0b54ce684b6f4cb248ab705df900')
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('1b67251d494199cfd4ba9855f20d41b6b0be8544757c2d5d416a90d044f4d0ec', 'proxy')
compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': 'ff8c5cc4cb87ba04aca3be5fcd3c6adcd08f5f4e6de5680857cbab37fd3027af',
'Warnings': None}
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/disconnect HTTP/1.1" 200 0
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/connect HTTP/1.1" 200 0
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('ff8c5cc4cb87ba04aca3be5fcd3c6adcd08f5f4e6de5680857cbab37fd3027af')
compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None
compose.parallel.feed_queue: Pending: set()
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('22b774d0451c7aea118ba928a9a87177be09e63286f1d4eeaf009ddfe3c4c44f', 'proxy', aliases=['memcache', '22b774d0451c'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None)
compose.parallel.feed_queue: Pending: set()
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/containers/ff8c5cc4cb87ba04aca3be5fcd3c6adcd08f5f4e6de5680857cbab37fd3027af/json HTTP/1.1" 200 None
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/disconnect HTTP/1.1" 200 0
compose.cli.verbose_proxy.proxy_callable: docker start <- ('7d81ef23610f1b8f7ac95837cbf6c9eef977b5b0846fea24be5c7054e471df39')
compose.parallel.feed_queue: Pending: set()
etch.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> JSON...
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/connect HTTP/1.1" 200 0
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('1b67251d494199cfd4ba9855f20d41b6b0be8544757c2d5d416a90d044f4d0ec', 'proxy', aliases=['api', '1b67251d4941'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None)
compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('ff8c5cc4cb87ba04aca3be5fcd3c6adcd08f5f4e6de5680857cbab37fd3027af', 'proxy')
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None
compose.cli.verbose_proxy.proxy_callable: docker start <- ('22b774d0451c7aea118ba928a9a87177be09e63286f1d4eeaf009ddfe3c4c44f')
compose.parallel.feed_queue: Pending: set()
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/disconnect HTTP/1.1" 200 0
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/connect HTTP/1.1" 200 0
compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('ff8c5cc4cb87ba04aca3be5fcd3c6adcd08f5f4e6de5680857cbab37fd3027af', 'proxy', aliases=['ff8c5cc4cb87', 'comments-db'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None)
compose.cli.verbose_proxy.proxy_callable: docker start <- ('1b67251d494199cfd4ba9855f20d41b6b0be8544757c2d5d416a90d044f4d0ec')
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/connect HTTP/1.1" 200 0
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None
compose.cli.verbose_proxy.proxy_callable: docker start <- ('ff8c5cc4cb87ba04aca3be5fcd3c6adcd08f5f4e6de5680857cbab37fd3027af')
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
...
-- omitted ~30 lines
...
Creating api-comments ... done
compose.cli.verbose_proxy.proxy_callable: docker start -> None
compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='api', service='comments', number=1)
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing:
compose.parallel.feed_queue: Pending: set()
Creating api-memcache ... done
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/22b774d0451c7aea118ba928a9a87177be09e63286f1d4eeaf009ddfe3c4c44f/start HTTP/1.1" 204 0
compose.cli.verbose_proxy.proxy_callable: docker start -> None
compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='api', service='memcache', number=1)
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing:
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/ff8c5cc4cb87ba04aca3be5fcd3c6adcd08f5f4e6de5680857cbab37fd3027af/start HTTP/1.1" 204 0
compose.cli.verbose_proxy.proxy_callable: docker start -> None
Creating api-comments-db ... done
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing:
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
-- omitted ~15 lines
Creating api-redis ... done
compose.parallel.feed_queue: Pending: set()
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/ba67095c5ea718af13a09798bc2f5ab24f5d0b54ce684b6f4cb248ab705df900/start HTTP/1.1" 204 0
compose.cli.verbose_proxy.proxy_callable: docker start -> None
compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='api', service='redis', number=1)
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing:

compose.parallel.feed_queue: Pending: set()

-- omitted 100+ lines
compose.parallel.parallel_execute_iter: Failed: ServiceName(project='api', service='api', number=1)
compose.parallel.feed_queue: Pending: set()

ERROR: for api  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
compose.parallel.parallel_execute_iter: Failed:
compose.parallel.feed_queue: Pending: set()

ERROR: for api  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
compose.cli.errors.log_timeout_error: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
~~~

@titpetric can confirm I'm also having this issue.

IMHO this issue is on the docker side, not on the docker-compose side. Somebody should turn on debug logging on the docker daemon and pinpoint the delays there, and file an issue upstream. I'm not sure one could reproduce this easily without that.

If someone is willing to put in the time, I'd suggest replicating this by creating a fully loaded folder for a volume mount (something with about 100000+ files/folders should do), to see if a reliable reproduction of the issue can be achieved. It's likely that the docker daemon, or the kernel bind mount itself, caches some of the inode data beforehand. Which... is unfortunate.

A tcpdump might also confirm this in the case of a network filesystem (samba, nfs).
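A rough reproduction sketch along those lines (paths and file counts are arbitrary):

    # Generate ~100k small files and compare container start time
    # with and without the bind mount.
    mkdir -p /tmp/manyfiles
    for i in $(seq 1 100000); do echo x > "/tmp/manyfiles/f$i"; done

    time docker run --rm -v /tmp/manyfiles:/data alpine true
    time docker run --rm alpine true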

Same exact error here

    ERROR: for docker_async_worker__local_1  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=70)
    ERROR: for docker_elasticsearch__local_1  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=70)
    ERROR: for docker_web__local_1  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=70)

Restarting Docker also fixed it for me.

Restart is not a fix guys.....
How to avoid this for good?

Facing the same issue. Getting the below error for all docker containers of the system peers:

ERROR: for DNS UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

Is it because of some port mismatch or assignment in the compose file?

Yes, constantly running into this issue myself. I agree restarting is not a solution, but nothing else seems to do the trick :/

I'm also facing this issue :(
UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

Same issue here, and restarting Docker actually hangs too. The only way is to kill Docker or restart, but that can't be the solution.

@bitbrain yup this has been happening to me as well for quite some time.

I found a great solution to this (on macOS).

The reason why this kept happening to me was that Docker had too little memory available.

[Screenshot 2019-10-04 at 15.33.54: Docker Desktop memory settings]

Increasing the memory from 2GB up to 8GB solved the issue for me.

I was getting this error after running docker-compose up and then docker-compose down a couple of times. I tried everything in this thread: bumping the resources, restarting my mac and reinstalling the latest Docker. I could get docker-compose up running again after rebooting my box, but after cycling those commands a few times it would get back to this error and I couldn't get docker-compose up to run.

My issue appears to have been a conflict with another service (pow) that was binding to port 80 when one of my containers was also binding to port 80. I uninstalled pow and have not had a problem for three days now.

3 years this ticket has been open and it is still unresolved. The problem still occurs even if we increase the client connection timeout to 120 sec.

just happened to our server, an issue open since 2016, wtf

restarting Docker works for me.

@rodrigo-brito restarting is not a solution...

my man.

Also experiencing this now. Wild.

Have the same issue when trying docker-compose up or docker-compose down. I solved it by stopping the mysqld service and, once the container is up, I start mysql. RAM is at 20% usage.

Running Docker Desktop Community for Mac v2.1.0.5

I ran into this issue and solved it by increasing the amount of memory allocated to Docker (and decreasing the number of CPUs).
You can do this in Docker -> Preferences -> Advanced.
I went from 8 CPUs & 2GB RAM to 4 CPUs & 16GB RAM for my particular setup.

Ran into this issue on Ubuntu Server 18.04 LTS. Restarting docker doesn't fix the problem, and neither does setting the environment variables. Any ideas?

@bpogodzinski have you tried increasing your memory settings in Docker? I increased them from 2GB up to 8GB and that fixed the problem for me.

Generally speaking, this issue seems to happen when the containers require more memory than the configured available memory in Docker, so stuff just hangs.
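A quick way to check whether memory pressure is the culprit while the timeout is happening (a small sketch):

    docker stats --no-stream   # per-container CPU / memory usage snapshot
    free -h                    # overall RAM and swap on a Linux host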

We had this issue and it appears (for us) to be related to a named volume with a lot of files. I don't understand it, but it is the case for us that a docker-compose file (edited for brevity) has a service:

    serviceA:
        ...
        volumes:
            - serviceA_volume:/srvA/folder
    volumes:
        - serviceA_volume:

Inside the Dockerfile for serviceA is the seemingly harmless and ineffectual command:

    ...
    RUN mkdir -p /srvA/folder && chown -R user /srvA/folder
    ...

Notice that this changes the owner recursively in /srvA/folder, which in the named volume is a big filesystem with 100K's of files. However, this happens when the image is built and that folder is empty. It appears that using the named volume inherits the permissions of the image's local file and then proceeds to change the named volume's permissions.

This is a pretty edge case and probably not the same problem everyone else is having, but it was our problem (toggling the line toggles the error). The conclusion is that this http timeout probably results from multiple causes.
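If the copy of image contents into the empty named volume is indeed what stalls things, one thing to experiment with is disabling that copy via the long volume syntax (a sketch, assuming compose file format 3.2+):

    serviceA:
      volumes:
        - type: volume
          source: serviceA_volume
          target: /srvA/folder
          volume:
            nocopy: true   # don't copy the image's /srvA/folder into the volume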

Restarting docker never solved the issue in my case, increasing the resources definitely did.

From my experience this problem often arises on small cloud instances where the amount of RAM is perfectly fine during regular operation but proves insufficient during docker or docker-compose operations. You could easily increase the RAM, but it would probably drastically increase the cost of a small VM.

In each case, adding a swap partition or even a swap file solved this issue for me!
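For reference, a minimal sketch of adding a swap file on such a VM (size and path are up to you):

    sudo fallocate -l 2G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # persist across reboots by adding this line to /etc/fstab:
    #   /swapfile none swap sw 0 0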

Just occurred to me on a raspberry pi. No volume with a huge amount of files or anything.
Actually I've been spawning these containers on that raspberry for a while now (a year or two lol).
Not sure what changed.
Seems a bit "out of the blue".

Problem still appears on docker desktop 2.2.0.3 on macOS 🙁

I resolved my issue with the following commands:
docker volume prune
docker system prune
(only one of these commands might be enough, but I cannot reproduce for the moment...)

I resolved my issue with the following commands:
docker volume prune
docker system prune
(only one of these commands might be enough, but I cannot reproduce for the moment...)

@amaumont's solution helped, although I think this would keep coming back over time.
As everyone else has said, restarting docker is not a proper solution, it's a bandaid.

We are having multiple issues with docker-compose, too.

After setting MaxSessions 500 in sshd_config (see #6463) we now also get read timeouts.
Setting both timeouts to 120 seconds resolved the issue for the next DOCKER_HOST=xxx@yyy docker-compose up -d run.

During the 2nd run the machine load went as high as 30 (sic!) before the docker-compose command failed due to timeouts. A docker restart did not solve this problem, not even temporarily.
The server is an AWS EC2 instance with enough CPU/Disk/NetIO etc., the compose file includes 1 traefik and 3 services with mailhog, so nothing special here. Running docker-compose up -d with the same docker-compose.yml file directly on the server works reliably and as expected.
Running with --verbose shows over a thousand consecutive lines containing compose.parallel.feed_queue: Pending: set().

I will try to rsync the docker-compose file to the remote server and run docker-compose directly on that machine as a workaround.

For me, it helped to simply restart docker.

Happens pretty often for me when trying to push to my private registry from bitbucket pipelines. Works well when pushing from local PC tho.
Restarting docker could help for a while, yet this "while" lasts for 10 min max :c

upd. setting DOCKER_CLIENT_TIMEOUT and COMPOSE_HTTP_TIMEOUT seemed to help, but I don't know for how long

I started getting these since switching to Docker Edge with Caching on

This has been happening pretty consistently for me since I started using Docker 2-3 years ago. After a container has been running for a while, it becomes a zombie and the entire Docker engine needs to be restarted for things to become responsive again. This feels like a resource leak of some kind, since idle time seems to be very relevant to the experienced behaviour.

If no containers are running, or they only run for a short amount of time, everything seems to be working fine for days or weeks. But as soon as I let a container run for a few hours, it becomes unresponsive, I have to force-stop it on the command line, and any attempt at communicating with docker or docker-compose just fails with a timeout. A restart is the only working solution.

Output of docker-compose version

    docker-compose version 1.25.5, build 8a1c60f6
    docker-py version: 4.1.0
    CPython version: 3.7.5
    OpenSSL version: OpenSSL 1.1.1f  31 Mar 2020

Output of docker version

    Client: Docker Engine - Community
     Version:           19.03.8
     API version:       1.40
     Go version:        go1.12.17
     Git commit:        afacb8b
     Built:             Wed Mar 11 01:21:11 2020
     OS/Arch:           darwin/amd64
     Experimental:      false

    Server: Docker Engine - Community
     Engine:
      Version:          19.03.8
      API version:      1.40 (minimum version 1.12)
      Go version:       go1.12.17
      Git commit:       afacb8b
      Built:            Wed Mar 11 01:29:16 2020
      OS/Arch:          linux/amd64
      Experimental:     false
     containerd:
      Version:          v1.2.13
      GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
     runc:
      Version:          1.0.0-rc10
      GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
     docker-init:
      Version:          0.18.0
      GitCommit:        fec3683

Output of docker-compose config

    services:
      portal:
        container_name: developer_portal
        image: swedbankpay/jekyll-plantuml:1.3.8
        ports:
        - published: 4000
          target: 4000
        - published: 35729
          target: 35729
        volumes:
        - .:/srv/jekyll:rw
        - ./.bundle:/usr/local/bundle:rw
    version: '3.7'

macOS Mojave 10.14.6.

I faced the same issue, even though I increased resources from 4GB RAM, 1GB swap to 6GB RAM, 2GB swap.

I am also facing the same issue.

I've been facing the same issue on Ubuntu 18.04 LTS (8 GB RAM) using HTTPS.

I'm able to spawn containers with docker-compose up, however once deployed I'm unable to stop containers with docker-compose down. Restarting the docker daemon or rebooting the VM have proven to be ineffective. Adding timeout environment variables (DOCKER_CLIENT_TIMEOUT, COMPOSE_HTTP_TIMEOUT) also didn't do anything.

I'm able to interact with and stop containers individually, I can inspect containers, attach to them, and anything else, but I cannot stop or kill them using the docker-compose command.

The error message is always the same:

            msg: 'Error stopping project - HTTPSConnectionPool(host=[ommited], port=2376): Read timed out. (read timeout=120)                      

I was having the same issue when I had the following in my docker-compose.yml:

    logging:
      driver: "json-file"
      options:
        max-size: 100m
        max-file: 10

The error was gone when I added quotes around "10". It is stated in the docs that the values for max-file and max-size must be strings, but still. The error message is quite misleading.
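For reference, this is the shape that worked once the values are quoted as strings (only max-file strictly needed the quotes here):

    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "10"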

I was having the same issue when I had the following in my docker-compose.yml:

    logging:
      driver: "json-file"
      options:
        max-size: 100m
        max-file: 10

The error was gone when I added quotes around "10". It is stated in the docs that the values for max-file and max-size must be strings, but still. The error message is quite misleading.

You saved my day. Thank you so much!

I was having the same issue when I had the following in my docker-compose.yml:

    logging:
      driver: "json-file"
      options:
        max-size: 100m
        max-file: 10

The error was gone when I added quotes around "10". It is stated in the docs that the values for max-file and max-size must be strings, but still. The error message is quite misleading.

I'm configuring the logging driver at the docker daemon level. I'm using fluentd as my logging driver, so unfortunately this fix won't work for me. =/

tried this

    export DOCKER_CLIENT_TIMEOUT=120
    export COMPOSE_HTTP_TIMEOUT=120

and it seems to fix the issue for now

Other solutions people mentioned in this thread:

  • Restart Docker
  • Increase Docker CPU & memory

Well, nothing worked for me, except the timeout option, kudos to you.

I'm getting this since I started to use an NFS mounted directory inside one of my containers. That NFS mounted directory is on a slow link (in a remote location that has a low bandwidth connection). Could that be the problem?

I'm experiencing this very frequently on Mac, Docker 2.4.0.0, in two different projects with different docker-compose.yml configs. I don't recall it ever happening before ~1 week ago, which is when I upgraded to 2.4.0.0. Is there a regression?

I've tried increasing the timeout to 600, increasing RAM to 16GB & swap to 4GB, restarting Docker, restarting my entire MacBook, nothing seems to work, except randomly trying again and again so it will occasionally work. But then the next time I need to restart or rebuild a container, same problem :(

Started seeing this with 2.4.0.0 on Mac as well. The workaround for me is to restart docker but I will run into it again after.

Same here! With the update to 2.4.0 our setups sometimes do not start at all with the mentioned Read timed out. errors, sometimes only some containers start up, others throw this error. I am already thinking about a downgrade!

Just to mention: This issue affects both setups using NFS shares as well as projects using "normal" mounted volumes

Same issue here, also on mac and after the 2.4.0 update. I'm currently checking if downgrading helps.

Update: downgrading to the previous version, deleting cache and rebuilding fixes the issue.

I also recently started seeing this issue (Mac, 2.4.0.0), when I never saw it before. Running docker image prune made the problem go away for a couple of days, but now it's back again.

Also started having this timeout error frequently since the 2.4.0 update (on Mac OS Mojave 10.14.5)

Also seeing this with increased frequency since updating to Docker Desktop 2.4.0.0 (48506) on MacOS Catalina.

I get the same timeout issues since 2.4.0.0 on Mac OS. I never had this issue before.
I tried the edge build 2.4.1.0 (48583) but I still have the same issue.

I got the same issue and rebooting docker fixed it, on macOS Catalina (10.15.5) and docker version 2.4.0.0.

Same here, didn't have the problem before updating to Docker desktop 2.4.0.0.
Restarting Docker desktop works, but it's just a workaround.

Same here, also starting with v2.4.0.

Update: downgrading to the previous version, deleting cache and rebuilding fixes the issue.

Will try that. Not even sure how it's done. I assume it's by uninstalling and downloading an earlier version?

Yup, can confirm it made the difference for me too. Definitely v2.4 is to blame here somehow.

If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

How is 1Gbps a slow network, exactly?

Downgrading worked for me as well. For those managing Docker via Homebrew:

    brew uninstall docker
    brew install https://raw.githubusercontent.com/Homebrew/homebrew-cask/9da3c988402d218796d1f962910e1ef3c4fca1d3/Casks/docker.rb

If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

How is 1Gbps a slow network, exactly?

In my case this happened due to an NFS mounted network drive.
The "slow" network speed's root cause was the use of NFS, not the physical link speed.
But it definitely shows there is a problem in the implementation, and I would be surprised if changing HTTP_TIMEOUT will solve it.

Same here. Significant slowdown in container creation, resulting in the same HTTP timeout error on Docker for Mac v2.4. Setting COMPOSE_HTTP_TIMEOUT=120 worked, but the container creation slowness is still a new issue. Downgrading to v2.3 also fixes this.

I can confirm the same problem since I installed Docker for Mac v2.4.
I can also confirm a significant increase in RAM and CPU consumption even in idle moments, just with the Docker daemon running. But I guess it has nothing to do with the compose package itself.

This problem still exists in Docker v2.4.3.0.

I've also downgraded to 2.3 from 2.4 to work around the massive slowness issues in the 2.4 release. Happy to provide any logs that might be useful to debug what's going on here.

Echoing the above, this started happening in 2.4.2.x for me. Something changed in the upgrade from 2.3.

I made some tests in a Linux environment, and had a similar problem. I installed the latest docker-compose binary (v1.27.4) and had the same timeout problem you guys are reporting.

After downgrading to 1.27.2, the same version available in Docker for Mac 2.3, the problem has disappeared.

Same issue with the current version on Ubuntu 20.04.

I'm still frequently experiencing timeout errors since the 2.4.0 update that are still not fixed in 2.5.0

Yep, same here. It was working fine for me for the past two years. But 2 months ago, suddenly, whenever I want to stop one instance and start another docker project it throws:
for apache UnixHTTPConnectionPool(host='localhost', port=None): Read timed out.

Restarting Docker fixes the issue. But it is a real pain when I have to switch between projects multiple times in one day.

Hitting the same issue since 2.4, 300% cpu at all times, 2.5 didn't help, downgraded back to 2.3 and things are okay. This is on the latest MacBook w/ i7 cpu and 32GB RAM.

I've just upgraded to the latest Docker for Mac version (v2.5.0.1) and the problem seems to be solved.
No more UnixHTTPConnection error, and no more 100% CPU use.

Not sure if anyone else can confirm that.

How did you get that? Opening Docker on Mac and doing "Check for Updates" still says I have the latest, 2.4.2.0.

I've just upgraded to the latest Docker for Mac version (v2.5.0.1) and the problem seems to be solved.
No more UnixHTTPConnection error, and no more 100% CPU use.

Not sure if anyone else can confirm that.

I just experienced the issue on v2.5.0.1. Restarting docker seems to (at least temporarily) resolve the issue.

How did you get that? Opening Docker on Mac and doing "Check for Updates" still says I have the latest, 2.4.2.0.

I cannot show you any screenshot since I already upgraded, but I think you may have some trouble getting updates on your computer, since there has been a previous v2.5.0 version available for more than a week.

You can check it in the Docker for Mac release notes (and grab any new installer from there).

I'm running Edge. That probably explains it.

Can confirm that v2.5.0.1 is at least marginally better. Not getting timeouts at every boot anymore, and haven't run into it yet since updating this morning. Container boot time still seems much slower than 2.3, though.

Edit: just ran into the timeout errors again, after about 4 or 5 restarts of my docker-compose project. Also ran into a new error with 2.5.0.1: Cannot start service <container name>: error while creating mount source path <local mount path>: mkdir <local mount path>: file exists

Can confirm that v2.5.0.1 is at least marginally better. Not getting timeouts at every boot anymore, and haven't run into it yet since updating this morning. Container boot time still seems much slower than 2.3, though.

Edit: just ran into the timeout errors again, after about 4 or 5 restarts of my docker-compose project. Also ran into a new error with 2.5.0.1: Cannot start service <container name>: error while creating mount source path <local mount path>: mkdir <local mount path>: file exists

OK, I'm also still facing some issues with the 2.5.0.1 version. CPU usage is still too high compared to v2.3.x, and the speed is also pretty slow.

Can anyone from the Docker team acknowledge and weigh in on this?

God, 4 years have passed, this issue is still not solved, and it happens to me all the time.


Source: https://bleepcoder.com/compose/175983352/unixhttpconnectionpool-host-localhost-port-none-read-timed
