Monday, September 22, 2014

glpi entities expanded by default

In ajax/entitytreesons.php, comment out the two original assignments and force 'expanded' to true:

//   $path['expanded'] = isset($ancestors[$ID]);
     $path['expanded'] = true;

//   $path['expanded'] = isset($ancestors[$row['id']]);
     $path['expanded'] = true;

Friday, September 19, 2014

SmartOS: move (migrate) a VM to another Global Zone

To move a running virtual machine, either a zone or a KVM instance, from one hypervisor (Global Zone) to another, I usually follow these manual steps.

To minimize downtime, the migration happens in two steps.
The first snapshot and transfer of the ZFS filesystems is done without halting the VM.
In the second step we shut down the VM, take another snapshot, and send it with an incremental transfer, which is very fast.

First of all, you need the UUID of the VM you want to move from one global zone to the other.

[root@gz1 /]# vmadm list
UUID                                  TYPE  RAM      STATE             ALIAS
561b686e-3119-4ab0-932e-20fc944fb001  KVM   512      running           vm1
e44f3c76-4acb-11e3-a536-a7cfa8b66838  OS    512      running           vm2
b803b5b9-bc86-4d0f-b450-16862e7bd7ed  OS    2048     running           vm3
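If you prefer to look the UUID up by its alias, here is a small sketch; it assumes vmadm's parsable mode, where -p prints colon-separated fields:

```shell
# Hypothetical helper: resolve a VM's UUID from its alias.
# Assumes `vmadm list -p -o uuid,alias` prints colon-separated fields.
ALIAS=vm1
UUID=$(vmadm list -p -o uuid,alias | awk -F: -v a="$ALIAS" '$2 == a {print $1}')
echo "$UUID"
```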



Now we need the list of all the ZFS filesystems related to that VM.

[root@gz1 ~]# zfs list -o name | grep 561b686e-3119-4ab0-932e-20fc944fb001

zones/561b686e-3119-4ab0-932e-20fc944fb001
zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0
zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1
zones/cores/561b686e-3119-4ab0-932e-20fc944fb001


Now we must create a snapshot of every ZFS filesystem.
Note: at this point the VM doesn't need to be stopped; you can leave it up and running to minimize downtime.

[root@gz1 ~]# zfs snapshot zones/561b686e-3119-4ab0-932e-20fc944fb001@tosend
[root@gz1 ~]# zfs snapshot zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0@tosend
[root@gz1 ~]# zfs snapshot zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1@tosend
[root@gz1 ~]# zfs snapshot zones/cores/561b686e-3119-4ab0-932e-20fc944fb001@tosend
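The four commands above can be collapsed into one loop; a sketch, assuming the dataset list from zfs list is the same as before:

```shell
# Snapshot every dataset belonging to the VM in one pass (sketch).
UUID=561b686e-3119-4ab0-932e-20fc944fb001
for ds in $(zfs list -H -o name | grep "$UUID"); do
    zfs snapshot "${ds}@tosend"
done
```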


Now it is time to send these snapshots to the destination. This may take a long time.

[root@gz1 ~]# zfs send zones/561b686e-3119-4ab0-932e-20fc944fb001@tosend | ssh gz2.domain zfs receive -v zones/561b686e-3119-4ab0-932e-20fc944fb001
[root@gz1 ~]# zfs send zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0@tosend | ssh gz2.domain zfs receive -v zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0
[root@gz1 ~]# zfs send zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1@tosend | ssh gz2.domain zfs receive -v zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1
[root@gz1 ~]# zfs send zones/cores/561b686e-3119-4ab0-932e-20fc944fb001@tosend | ssh gz2.domain zfs receive -v zones/cores/561b686e-3119-4ab0-932e-20fc944fb001
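The initial full send can likewise be done in a loop; a sketch under the same assumptions:

```shell
# Send the @tosend snapshot of every VM dataset to the destination GZ (sketch).
UUID=561b686e-3119-4ab0-932e-20fc944fb001
DEST=gz2.domain
for ds in $(zfs list -H -o name | grep "$UUID"); do
    zfs send "${ds}@tosend" | ssh "$DEST" zfs receive -v "$ds"
done
```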

After that, we have to stop the virtual machine.

[root@gz1 ~]# vmadm stop 561b686e-3119-4ab0-932e-20fc944fb001

Then take additional snapshots of the ZFS filesystems, as before, giving them a different name. These snapshots are now in a consistent state, since the operating system inside the virtual machine is no longer running.

[root@gz1 ~]# zfs snapshot zones/cores/561b686e-3119-4ab0-932e-20fc944fb001@tosend-last 
[root@gz1 ~]# zfs snapshot zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1@tosend-last 
[root@gz1 ~]# zfs snapshot zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0@tosend-last 
[root@gz1 ~]# zfs snapshot zones/561b686e-3119-4ab0-932e-20fc944fb001@tosend-last


Finally, we have to send these ZFS snapshots using an incremental send. This should take only a short time, since just the blocks changed since the first snapshot are transferred.

[root@gz1 ~]# zfs send -i zones/561b686e-3119-4ab0-932e-20fc944fb001@tosend zones/561b686e-3119-4ab0-932e-20fc944fb001@tosend-last | ssh gz2.domain zfs receive -Fv zones/561b686e-3119-4ab0-932e-20fc944fb001
[root@gz1 ~]# zfs send -i zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0@tosend zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0@tosend-last | ssh gz2.domain zfs receive -Fv zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0
[root@gz1 ~]# zfs send -i zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1@tosend zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1@tosend-last | ssh gz2.domain zfs receive -Fv zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1

[root@gz1 ~]# zfs send -i zones/cores/561b686e-3119-4ab0-932e-20fc944fb001@tosend zones/cores/561b686e-3119-4ab0-932e-20fc944fb001@tosend-last | ssh gz2.domain zfs receive -v zones/cores/561b686e-3119-4ab0-932e-20fc944fb001
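As with the full send, the incremental sends can be wrapped in a loop; again a sketch:

```shell
# Incremental send: only blocks changed since @tosend are transferred (sketch).
UUID=561b686e-3119-4ab0-932e-20fc944fb001
DEST=gz2.domain
for ds in $(zfs list -H -o name | grep "$UUID"); do
    zfs send -i "${ds}@tosend" "${ds}@tosend-last" | ssh "$DEST" zfs receive -Fv "$ds"
done
```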

A few last operations remain.
Grab the line related to the VM from /etc/zones/index in the source global zone, and append it to the end of the same file in the destination global zone.

[root@gz1 ~]# grep 561b686e-3119-4ab0-932e-20fc944fb001 /etc/zones/index


[root@gz2 ~]# echo "561b686e-3119-4ab0-932e-20fc944fb001:installed:/zones/561b686e-3119-4ab0-932e-20fc944fb001:561b686e-3119-4ab0-932e-20fc944fb001" >> /etc/zones/index

Finally, copy the XML configuration file from the source global zone to the destination one.


[root@gz1 ~]# scp /etc/zones/561b686e-3119-4ab0-932e-20fc944fb001.xml gz2.domain:/etc/zones/561b686e-3119-4ab0-932e-20fc944fb001.xml

At this point you can boot the VM on the destination global zone, check that everything works as expected, and then delete the old VM from the source global zone.
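One possible closing sequence, using the standard vmadm subcommands; treat this as a sketch and only delete the source copy after verifying the migrated VM:

```shell
# On gz2: boot the migrated VM and check its state.
vmadm start 561b686e-3119-4ab0-932e-20fc944fb001
vmadm list
# On gz1, only after verifying the VM on gz2 (this destroys the source copy):
# vmadm delete 561b686e-3119-4ab0-932e-20fc944fb001
```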

Wednesday, September 10, 2014

smartos pxe: operation not permitted

I downloaded the latest SmartOS image (20140904T175324Z) suitable for PXE booting.
After unpacking the file, platform-20140904T175324Z.tgz, and following the steps described at http://wiki.smartos.org/display/DOC/PXE+Booting+SmartOS, I stumbled upon an error that prevented the server from booting. This problem had never happened with previous releases.

Operation not permitted (http://ipxe.org/410c613c)
Could not boot image: Operation not permitted (http://ipxe.org/410c613c)


Visiting the proposed link and looking at the Apache log file (I've configured iPXE to download the images via HTTP), I found where the problem was.

 [Wed Sep 10 xx:xx:xx 2014] [error] [client 192.168.56.123] (13)Permission denied: file permissions deny server access: /srv/tftp/images/smartos/20140904T175324Z/platform/i86pc/amd64/boot_archive


After untarring the image file, the permissions on the boot_archive file were 600 instead of the expected 644, as they had been in every previous release.
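The fix is simply to restore the expected permissions on the extracted file, using the path reported in the Apache error log:

```shell
# Make boot_archive world-readable again so Apache can serve it to iPXE.
chmod 644 /srv/tftp/images/smartos/20140904T175324Z/platform/i86pc/amd64/boot_archive
```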