Network connection lost during subdomain creation - i-MSCP stuck in process
Thread by TheRiddler1982 (marked as Resolved)

There must be other users out there running Docker and i-MSCP on the same server, so the problem must be specific to my setup; otherwise, others would have noticed it before.
After investigation, it appears that the problem is related to your Docker daemon. Once the Docker daemon is stopped, everything works as expected with the control panel. I have no time to investigate further, and this is not really i-MSCP related.
Note that the Docker daemon is now stopped. You should fix your configuration; there is surely a clash somewhere.
How did you find out that Docker was the cause?
The problem appears clearly in the systemd log:
Code:
root@cloudserv111:~# service docker status
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2019-09-09 18:33:53 CEST; 2min 43s ago
     Docs: https://docs.docker.com
 Main PID: 168039 (dockerd)
    Tasks: 40
   Memory: 83.6M
      CPU: 16.239s
   CGroup: /system.slice/docker.service
           └─168039 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Sep 09 18:33:59 cloudserv111 dockerd[168039]: time="2019-09-09T18:33:59.825403326+02:00" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:48
Sep 09 18:33:59 cloudserv111 dockerd[168039]: time="2019-09-09T18:33:59.914487659+02:00" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.
Sep 09 18:33:59 cloudserv111 dockerd[168039]: time="2019-09-09T18:33:59.914530858+02:00" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:48
Sep 09 18:34:04 cloudserv111 dockerd[168039]: time="2019-09-09T18:34:04.339311254+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 09 18:34:05 cloudserv111 dockerd[168039]: time="2019-09-09T18:34:05.580269850+02:00" level=warning msg="b82dc5bfc760eb931d528bb92dbe263d4e776dfd1a168a3d9984c09b4a09e70d cleanup: failed to unmount IPC: umount /
Sep 09 18:34:05 cloudserv111 dockerd[168039]: time="2019-09-09T18:34:05.636678040+02:00" level=error msg="fatal task error" error="task: non-zero exit (1)" module=node/agent/taskmanager node.id=q7xdrg4802p4hb9i0hw
Sep 09 18:34:05 cloudserv111 dockerd[168039]: time="2019-09-09T18:34:05.776777889+02:00" level=warning msg="failed to create proxy for port 9876: listen tcp :9876: listen: address already in use"
Sep 09 18:34:06 cloudserv111 dockerd[168039]: time="2019-09-09T18:34:06.092390863+02:00" level=warning msg="failed to deactivate service binding for container wekan_main.1.rxmpgvj2q2ept5t9z3nkc9q4h" error="No such
Sep 09 18:34:10 cloudserv111 dockerd[168039]: time="2019-09-09T18:34:10.871438637+02:00" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.
Sep 09 18:34:10 cloudserv111 dockerd[168039]: time="2019-09-09T18:34:10.872150920+02:00" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:48
root@cloudserv111:~# service docker stop
Especially this line:

failed to create proxy for port 9876: listen tcp :9876: listen: address already in use

Port 9876 is the one used by the i-MSCP daemon. I don't really know what your Docker is trying to do with that port (a proxy?), but at least you have a lead for investigating now.
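For anyone debugging a similar clash, a quick way to see which process already owns a port is a one-line `ss` or `lsof` query (a generic sketch, not from the original thread; run as root so process names are shown):

```shell
# Identify which process is listening on TCP port 9876.
# `ss` ships with iproute2; older systems may only have `netstat -tlnp`.
ss -tlnp 'sport = :9876'

# lsof names the owning process directly, if installed:
# lsof -nP -iTCP:9876 -sTCP:LISTEN
```

Here this would have shown `dockerd` (or the i-MSCP daemon, whichever bound first) holding port 9876.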
Apologies for that. Now that I know the port :-):
It was an issue with Portainer, which was the victim of an attack on a specific port; they changed their proxy port to a new one to stop the attacks. That happened 3 days ago. A restart of the swarm updated the Docker file, and there it was: port 9876.
I would never have found the reason for that error! Thanks!
I was setting up Kibana today to get a better look at my logs; that was clearly a chicken-and-egg problem.
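For readers hitting the same symptom: the "address already in use" warning dockerd logged is an ordinary EADDRINUSE, which any two listeners on one port will produce. A minimal reproduction (my own sketch, using Python's stdlib socket module as a generic TCP listener; port 9876 mirrors the log, but any free port works):

```shell
# Reproduce dockerd's "listen tcp :9876: address already in use" warning:
# two listeners cannot bind the same TCP port.

# First listener binds the port and stays alive briefly in the background.
python3 -c 'import socket, time
s = socket.socket()
s.bind(("127.0.0.1", 9876))
s.listen()
time.sleep(10)' &
FIRST=$!
sleep 1   # give the first listener time to bind

# The second bind attempt fails with EADDRINUSE, the same error dockerd hit:
python3 -c 'import socket
socket.socket().bind(("127.0.0.1", 9876))' 2>&1 | grep -i "address already in use"

kill $FIRST
```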
You're welcome. Please mark the thread as solved.