Each new VM acquires its IP address from a DHCP server running on the head machine. Every IP is tied to the MAC address of a particular VM. Before configuring the DHCP server and the pools, it is necessary to create a mapping file that contains an IP/MAC pair for every VM.
A simple script for generating this file:
for i in {100..150}; do
    ip="192.168.122.$i"
    mac="52:54:$(dd if=/dev/urandom count=1 2>/dev/null | md5sum | sed 's/^\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4/')"
    echo $ip $mac >> macip_mapping.txt
done
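The resulting macip_mapping.txt contains one IP/MAC pair per line, for example (these pairs match the DHCP configuration shown below):

192.168.122.100 52:54:42:4a:78:04
192.168.122.101 52:54:00:ae:0d:bf
192.168.122.102 52:54:00:8a:e4:f5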
This macip_mapping.txt file must be copied to every pool's working directory. The dhcp.conf on the head machine should contain the same IP/MAC pairs as the macip_mapping.txt file, plus the options shown below:
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.122.255;
option routers 192.168.122.3;
option domain-name-servers 194.204.0.1, 192.168.1.199;
option domain-name "hep.kbfi.ee";

subnet 0.0.0.0 netmask 0.0.0.0 {
    host vm00 {hardware ethernet 52:54:42:4a:78:04; fixed-address 192.168.122.100;}
    host vm01 {hardware ethernet 52:54:00:ae:0d:bf; fixed-address 192.168.122.101;}
    host vm02 {hardware ethernet 52:54:00:8a:e4:f5; fixed-address 192.168.122.102;}
    host vm03 {hardware ethernet 52:54:00:36:00:c1; fixed-address 192.168.122.103;}
    host vm04 {hardware ethernet 52:54:00:45:27:73; fixed-address 192.168.122.104;}
    ...
    ...
}
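Since the host entries must stay in sync with macip_mapping.txt, they can be generated from the mapping file instead of being written by hand. A minimal sketch, assuming the vm00, vm01, ... naming used above:

n=0
while read ip mac; do
    printf 'host vm%02d {hardware ethernet %s; fixed-address %s;}\n' "$n" "$mac" "$ip"
    n=$((n+1))
done < macip_mapping.txt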
For load balancing, the Apache proxy must be enabled on the head machine. Enable the necessary Apache modules:
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod proxy_balancer
sudo a2enmod headers
sudo a2enmod proxy_connect
Add the following lines to the /etc/apache2/mods-available/proxy_balancer.conf file (create the file if it does not exist):
<IfModule mod_proxy_balancer.c>
    <Proxy balancer://vm-group>
        Include /home/cell/load_balancer_inc
        ProxySet lbmethod=byrequests
    </Proxy>

    ProxyPass /test balancer://vm-group
    ProxyPassReverse /test/ balancer://vm-group/test/

    <Location /balancer-manager>
        SetHandler balancer-manager
        Order Deny,Allow
        Allow from all
    </Location>
</IfModule>
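The included /home/cell/load_balancer_inc file holds the current set of balancer members, presumably one BalancerMember line per running VM, for example:

BalancerMember http://192.168.122.100
BalancerMember http://192.168.122.101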
The content of the /etc/apache2/mods-available/proxy.conf file:
<IfModule mod_proxy.c>
    ProxyRequests Off

    <Proxy *>
        AddDefaultCharset off
        Order deny,allow
        Allow from all
    </Proxy>

    ProxyVia On
</IfModule>
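For the enabled modules and the configuration above to take effect, Apache must be restarted, e.g. on Debian/Ubuntu:

sudo /etc/init.d/apache2 restart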
The load balancer's web interface is now available at http://head_host/balancer-manager, and the VMs' web content is served at http://head_host/test.
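To verify the setup, the balanced URL can be fetched from any client; repeated requests should be spread across the running VMs (assuming the VMs already serve content):

$ curl http://head_host/test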
All necessary scripts, templates and the VM image are available in SVN. To download them:
$ svn co svn+ssh://neptune.hep.kbfi.ee/home/ilja/svnroot/autoscale autoscale
Every pool must contain a template directory with the VM templates (.xml files), the image file (image.qcow2), the scripts for creating and destroying VMs (start_vm.sh and stop_vm.sh), and the macip_mapping.txt file.
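An illustrative listing of a pool's working directory (the templates/ directory name is an assumption; everything else is named above):

image.qcow2  macip_mapping.txt  start_vm.sh  stop_vm.sh  templates/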
There are three types of VM templates; the type is selected with the -t option of start.sh (for example medium, as used below).
start_vm.sh will create a new VM with the configuration of the selected template and copy all necessary files (the VM configuration and image) into a directory named after the VM's IP. After that, the script adds the new VM to the libvirt catalog and adjusts the pool's routing table as described here.
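A rough sketch of what this amounts to, with placeholder names (vm01, medium.xml) rather than the script's actual variables:

mkdir 192.168.122.101                      # directory named after the VM's IP
cp templates/medium.xml 192.168.122.101/   # VM configuration
cp image.qcow2 192.168.122.101/            # VM image
virsh define 192.168.122.101/medium.xml    # add the VM to the libvirt catalog
virsh start vm01                           # vm01: the domain name set in the XML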
stop_vm.sh will destroy and undefine the VM.
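In libvirt terms this corresponds to (vm01 being an assumed domain name):

virsh destroy vm01     # force-stop the running VM
virsh undefine vm01    # remove its definition from the libvirt catalog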
These scripts are normally executed by the head's start.sh or stop.sh scripts. If they are run independently, it is necessary to add the hosts to the head's routing table, as sketched below. (See here)
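On the head, adding such a host route means routing the VM's IP through its pool machine. A minimal sketch using the IPs from the example below (the exact command depends on the routing setup):

route add -host 192.168.122.101 gw 192.168.1.67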
Starting and stopping VMs must be done on the head machine using the start.sh and stop.sh scripts.
For example, starting a new medium VM with IP 192.168.122.101 in pool 192.168.1.67 from the head machine:
$ ./start.sh -i 192.168.122.101 -p 192.168.1.67 -t medium
This script will run start_vm.sh in the given pool and set up the corresponding head-side state (routing table and load balancer entries) described above.
Stopping and destroying the same VM:
$ ./stop.sh -i 192.168.122.101 -p 192.168.1.67
This script will run stop_vm.sh in the given pool and remove the corresponding head-side entries.