Setting up an Apt-Cache server inside a Proxmox LXC

This guide assumes you have Proxmox Virtual Environment 5.1 set up and running.

Obtaining LXC Template on Proxmox

LXC templates for Proxmox can be searched and downloaded directly from the pve shell with the `pveam` utility. After updating the template index, we can print the list of available templates, and this list can be shortened by piping the output to `grep`. I will be installing the Ubuntu 16.04 LTS template.

From your pve shell:
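A sketch of those commands, using the template name from the Create CT step below (the storage name `local` may differ on your system):

```shell
# Refresh the index of available container templates
pveam update

# List available templates, filtered to Ubuntu
pveam available | grep ubuntu

# Download the Ubuntu 16.04 template to the "local" storage
pveam download local ubuntu-16.04-standard_16.04-1_amd64.tar.gz
```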

Creating the Linux Container (LXC)

In the upper right-hand corner of your pve web interface, click the Create CT button. Because we are using an LXC, very few system resources need to be allocated. Step through the tabs in the popup window and set the following items:

  • Hostname: Linux-Management
  • Password & Confirm Password
  • Template: ubuntu-16.04-standard_16.04-1_amd64.tar.gz
  • Root Disk: Disk size (GB): 2
  • Cores: 2
  • Memory (MB): 512
  • Swap (MB): 0
  • Network (unique to your setup): ensure the correct Bridge and VLAN Tag (if applicable) are selected
  • Set IPv4 to DHCP and add a static IP reservation with the LXC’s MAC address to your router’s configuration
  • Finish

Upgrade newly created LXC

Start your LXC and enter the console.  If you are met with a blank screen, hit ENTER to display the login prompt.

Log in with the username root and the password created during the setup dialogue. We will first create a non-root user and add it to the sudo group. Follow the on-screen prompts to create a password and add optional extra information for the user. Then exit back to the login prompt.
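A minimal sketch of those steps, with a hypothetical username of gilroy:

```shell
# Create a non-root user (prompts for password and optional details)
adduser gilroy

# Add the new user to the sudo group
usermod -aG sudo gilroy

# Exit back to the login prompt
exit
```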

Log in as your new user and upgrade the container:
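The standard Ubuntu upgrade commands:

```shell
sudo apt update
sudo apt upgrade -y
```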

Mount CIFS Share inside LXC

In order to mount a network share in our LXC, we need to first attach it to the pve host. Enter the pve shell, and create the directory for the mount location:
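For example, assuming a mount location of /mnt/apt-cache (any path works, but it is reused throughout this guide):

```shell
mkdir -p /mnt/apt-cache
```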

Next, add the network share to the end of the host’s `/etc/fstab`:
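An example entry, assuming a hypothetical NAS at 192.168.1.10 exporting a share named apt-cache, a credentials file at /root/.smbcredentials, and the /mnt/apt-cache mount point from above; adjust all of these to your environment:

```
//192.168.1.10/apt-cache  /mnt/apt-cache  cifs  credentials=/root/.smbcredentials,uid=100000,gid=100000  0  0
```

The uid=100000,gid=100000 options map ownership into the default unprivileged-container ID range so that root inside the container can write to the share.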
Please note that you may need to adjust the options to accommodate how your network share is configured.  Mount the network share with the following:
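With the fstab entry in place, the share can be mounted by its mount point (path assumed from earlier):

```shell
mount /mnt/apt-cache
```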

Next, we need to edit the LXC configuration file at `/etc/pve/lxc/<CTID>.conf` (where `<CTID>` is your container’s ID) to mount the network share. Add the following to the end of the configuration:
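A mount point entry of the following shape, assuming the host path /mnt/apt-cache from earlier and presenting it at the same path inside the container:

```
mp0: /mnt/apt-cache,mp=/mnt/apt-cache
```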

The container needs to be rebooted in order to load the changes to the configuration file.
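From the pve shell, with a hypothetical container ID of 100:

```shell
pct stop 100
pct start 100
```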

Setup apt-cacher-ng

From the LXC console, install the `apt-cacher-ng` package:
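```shell
sudo apt install apt-cacher-ng
```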
Check the permissions of the folder with:
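Assuming the /mnt/apt-cache mount point used earlier; the numeric IDs are what we need to match in the next step:

```shell
# -n prints numeric uid/gid instead of names
ls -ln /mnt/apt-cache
```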

We will want to create a new group that matches the `gid` of the folder, keeping the same group name that is used on the NAS. We will then add our account user and the `apt-cacher-ng` user to this group.
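As a sketch, with a hypothetical group name of nas, a gid of 1000, and the username gilroy (substitute the values from your ls -ln output and your own names):

```shell
sudo groupadd -g 1000 nas
sudo usermod -aG nas gilroy
sudo usermod -aG nas apt-cacher-ng
```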

Setup the cache folder within the mounted network share:
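For example, assuming the share is mounted at /mnt/apt-cache inside the container and the hypothetical nas group from above:

```shell
sudo mkdir -p /mnt/apt-cache/cache
sudo chown apt-cacher-ng:nas /mnt/apt-cache/cache
sudo chmod 775 /mnt/apt-cache/cache
```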



Open `/etc/apt-cacher-ng/acng.conf` as the superuser (aka `root`) with the editor of your choice. Set the cache storage location as follows:
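Assuming the cache directory created above:

```
CacheDir: /mnt/apt-cache/cache
```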

Under the commented-out line, add the following:

Comment out distributions that you do not use on your network. For my configuration, I am keeping Ubuntu and Debian only.
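In the default acng.conf, the repository remapping section looks roughly like the following (a sketch; the exact lines in your file may differ), with unused distributions commented out:

```
Remap-debrep: file:deb_mirror*.gz /debian ; file:backends_debian   # Debian
Remap-uburep: file:ubuntu_mirrors /ubuntu ; file:backends_ubuntu   # Ubuntu
# Remap-cygwin: file:cygwin_mirrors /cygwin
```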

Lastly, uncomment the PidFile line. Save the file and restart the `apt-cacher-ng` service by executing:
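On Ubuntu 16.04 this is a systemd service:

```shell
sudo systemctl restart apt-cacher-ng
```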

You should now be able to visit the Apt-Cacher NG maintenance page at http://&lt;LXC-IP&gt;:3142/acng-report.html, where &lt;LXC-IP&gt; is the IP address of your LXC (3142 is the default Apt-Cacher NG port).

Fixing CacheDir Bug

As of writing this, changing the location of the cache directory in the configuration has no effect. Until an upstream fix can be applied, the workaround is to move all cached packages to a temporary location, link your preferred directory to the hard-coded default directory, and move the cache back into the newly linked location. This can be done as follows:
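A sketch of the workaround, assuming the /mnt/apt-cache/cache directory from earlier and the hard-coded default path /var/cache/apt-cacher-ng:

```shell
sudo systemctl stop apt-cacher-ng

# Move the existing cache aside
mkdir /tmp/acng-cache
sudo mv /var/cache/apt-cacher-ng/* /tmp/acng-cache/

# Replace the hard-coded directory with a link to the share
sudo rmdir /var/cache/apt-cacher-ng
sudo ln -s /mnt/apt-cache/cache /var/cache/apt-cacher-ng

# Move the cache into the linked location
sudo mv /tmp/acng-cache/* /var/cache/apt-cacher-ng/

sudo systemctl start apt-cacher-ng
```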

The bug report can be found here: CacheDir setting ignored

Setup Clients

In order for your Debian-based installations to take advantage of the newly created cache, they need to be directed to the server location. On each client (including the LXC hosting the cache), create a proxy configuration file under `/etc/apt/apt.conf.d/` and add the following:
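For example, in a hypothetical file /etc/apt/apt.conf.d/00aptproxy, assuming the cache LXC is at 192.168.1.50:

```
Acquire::http::Proxy "http://192.168.1.50:3142";
```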

From this point on, all update requests are directed to our LXC. The LXC will then serve a cached version of a package or download and cache any new packages needed by your Debian systems.

About the Author: _gilroy
