Run i-MSCP in a Docker container

  • Hello all


    Has anyone tried to install i-MSCP in a docker container?


    I know i-MSCP doesn't like to be behind a NAT, so there would have to be some manual adjustments to the installation.
    Also:
    - Data and logs would live in a separate data container or directly on the host (see the sketch below).
    - The DNS server would need to be disabled.
    - The way services are restarted would need to change to align with the Docker way.
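
    A minimal sketch of what I mean, assuming the image is simply called "imscp" and the host paths are placeholders:

        # hedged sketch: keep data and logs on the host via volumes
        docker run -d --name imscp \
            -v /srv/imscp/www:/var/www \
            -v /srv/imscp/mail:/var/mail \
            -v /srv/imscp/logs:/var/log \
            imscp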


    What other things do I need to look for?


    Thank you for your suggestions.

  • I don't know too much about Docker yet...


    But shouldn't things just work like on any other VPS, e.g. OpenVZ?


    Do you want to run thousands of iMSCP instances in parallel? The shared filesystem will only save you some megabytes for system libs. I could also see splitting things up into services, but iMSCP is not yet as multiserver as the name implies...


    - NAT can work just fine. Only check the FTP server settings.
    - The filesystem should be kept separate in any case (meaning /var/mail and /var/www); depends on your hoster.
    - Why disable DNS?
    - Generally I would think of using Puppet or Chef to control the imscp configuration in the Docker way.


    Again, Docker seems to be too much overhead compared to the already existing VPSes... !?

  • It's more about getting a stable, isolated environment, separate from everything else on the server. Until now, whenever I did something, there was a chance that imscp somehow disrupted me, or I disrupted imscp. (E.g. I didn't want imscp to manage chkrootkit, so I disabled it in imscp, but I still have it installed. On every update imscp uninstalls it for me.)


    Second, if I decide I need a new machine or a new test instance, I can just "docker export" it, copy the image and all data over to the new machine, and "docker import" it there.
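
    In shell terms the move would look roughly like this (host name and paths are made up):

        # hedged sketch of the migration
        docker export imscp > imscp.tar                  # dump the container filesystem
        scp imscp.tar root@newhost:/root/                # copy it over
        rsync -a /srv/imscp/ root@newhost:/srv/imscp/    # plus the data volumes
        # on the new machine:
        docker import - imscp < imscp.tar                # recreate the image
        docker run -d --name imscp -v /srv/imscp/www:/var/www imscp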


    Third, when a new update comes out: I make a backup of the imscp database and install imscp in a new docker container. Deactivate the old container, activate the new one. All good? OK, normal operation resumes. All bad? OK, deactivate the new container, activate the old one, and normal operation resumes.
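
    The swap itself could be as simple as this sketch (container names and credentials are placeholders):

        # hedged sketch: backup, switch, roll back if needed
        mysqldump -u root -p imscp > imscp-backup.sql    # backup the panel database
        docker stop imscp_old && docker start imscp_new  # activate the updated container
        # all bad? roll back:
        docker stop imscp_new && docker start imscp_old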


    Fourth, security: if a virus somehow finds its way inside, I can assume that only the docker container and its data are infected. I can then scan the data from outside, where the virus has no power. If shit really hits the fan, I just "docker stop" the container and investigate.
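
    Scanning from the outside then boils down to something like this (clamscan and the host data path are just examples):

        # hedged sketch: freeze the container, scan its data where the virus has no power
        docker stop imscp
        clamscan -r /srv/imscp    # the data directory as mounted on the host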


    Again, Docker seems to be too much overhead compared to the already existing VPSes... !?


    Docker has practically no overhead. It's more like a chroot than OpenVZ/VirtualBox.


    - NAT can work just fine. Only check the FTP server settings.


    Yeah, I just need to make sure there's no static IP entry anywhere. At the moment Docker changes the container IP on every start. What I did for now is attach the container to the host interface, where I have a static IP (so no NAT or anything).
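
    That boils down to host networking; a sketch (not necessarily my exact command):

        # hedged sketch: share the host network stack, so the container
        # uses the host's static IP directly (no NAT, no per-start IP)
        docker run -d --net host --name imscp imscp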


    - The filesystem should be kept separate in any case (meaning /var/mail and /var/www); depends on your hoster.


    I have a hardware root server, and that's what I did. Those files are separate and just linked into the host.


    - Why disable DNS?


    To have more options if the container is down. I don't host DNS myself, on principle. Too much of a hassle.


    - Generally I would think of using Puppet or Chef to control the imscp configuration in the Docker way.


    I'd love to, but imscp's interactive installer doesn't let me fully automate it.
    By the way, the Docker Way™ is: build once, run anywhere, and never update a container (deploy a new one instead).
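
    In command form that pattern looks roughly like this (tag and names are placeholders):

        # hedged sketch: build once, never update in place
        docker build -t imscp:1.1 .               # build the image from a Dockerfile
        docker stop imscp && docker rm imscp      # retire the old container...
        docker run -d --name imscp imscp:1.1      # ...and deploy a fresh one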



    I managed to install it with surprisingly few hiccups. If somebody's interested I can share my configuration and changes to imscp.

  • Sorry, by overhead I didn't mean the technical part. I was referring to the configuration burden of having to dynamically configure things like the mentioned dynamic IPs and such. On a root server I still think OpenVZ is more advanced, or say more mature, than Docker. The tools' scopes are simply different :).


    Try running a (hidden) primary DNS inside imscp. Then customers can easily configure their domains through the web interface. Many hosters provide secondary DNS for you.


    I think this CLI script could be helpful: Command line tools for i-mscp.


    Sure, there aren't many hiccups in getting the product running inside docker. But I would not call this a correct docker install. A correct docker install would launch automatically, grab its configuration from a central server, and finally start its processes based on that configuration. In the end this would happen dynamically, e.g. load-based.
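
    As a sketch, such an entrypoint might look like this (the config URL, paths, and service names are all made up):

        #!/bin/sh
        # hedged sketch of a "correct" entrypoint: fetch config first, then start
        curl -s http://config.example.com/imscp.conf -o /etc/imscp/imscp.conf
        service apache2 start                  # bring up the managed services
        service proftpd start
        exec /usr/local/sbin/imscp-daemon      # keep one process in the foreground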


    When thinking about docker, I'd think of one central imscp panel server and many servers hosting email and websites, with a proxy director routing users to these different servers, reacting to traffic needs and launching service instances as website/mail traffic demands. For now that looks like a very long way to go; the imscp services aren't split up yet.


    Don't get me wrong though, I welcome your efforts and future patches! For now the existing KVM and VZ (Proxmox in my case) tools are easier than (self-hosted) cloud solutions (OpenStack, Amazon/Google/etc.).

  • As I said, the IPs aren't a problem anymore. But you're probably right: if I had done a normal installation, I'd have been up and running a long time ago. But I like to tinker and try new things, so...
    The reason I don't want OpenVZ is that all my other services are running fine with Docker, and I don't want yet another tool to manage (maybe later ;) ).


    To this day none of my customers has wanted custom DNS entries; most of them don't even know what that is.
    I think DNS should work, but I didn't test it. (Second reason: I have a dynamic DNS server with docker integration installed. I would need to integrate it into the bind/imscp config and again take care of the customisation.)
    What I would like to have is API integration with the DNS providers. But that can wait^^


    Thanks for the link, but sadly it seems quite outdated. It's OK though, everything's running fine^^


    Sure, there aren't many hiccups in getting the product running inside docker. But I would not call this a correct docker install. A correct docker install would launch automatically, grab its configuration from a central server, and finally start its processes based on that configuration. In the end this would happen dynamically, e.g. load-based.


    When thinking about docker, I'd think of one central imscp panel server and many servers hosting email and websites, with a proxy director routing users to these different servers, reacting to traffic needs and launching service instances as website/mail traffic demands. For now that looks like a very long way to go; the imscp services aren't split up yet.


    As of now, I only have one physical server, so every container gets its configuration from the central server :) But yeah, I thought about running etcd in the future (roughly as sketched below).
    Yeah, this is all possible. But I really want it mainly for the isolation; I'll think about all the other things once a website is actually that large and important.
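
    With etcd that could look roughly like this (the key name is invented, just to illustrate the flow):

        # hedged sketch: publish the config once, let each container pull it at start
        etcdctl set /imscp/imscp.conf "$(cat /etc/imscp/imscp.conf)"    # on the host
        etcdctl get /imscp/imscp.conf > /etc/imscp/imscp.conf           # inside the container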



    Don't get me wrong though, I welcome your efforts and future patches! For now the existing KVM and VZ (Proxmox in my case) tools are easier than (self-hosted) cloud solutions (OpenStack, Amazon/Google/etc.).


    Nah, I like Docker :D


    But thanks for all the feedback :)