Tuesday, December 10, 2013
SmartOS and percona-cluster-5.5.31nb3
Error in /var/log/mysql/error.log
WSREP_SST: [INFO] Streaming the backup to joiner at 192.168.0.5 4444 (20131210 11:16:26.461)
WSREP_SST: [ERROR] innobackupex finished with error: 3. Check /var/mysql//innobackup.backup.log (20131210 11:16:27.891)
WSREP_SST: [ERROR] Cleanup after exit with status:22 (20131210 11:16:27.912)
131210 11:16:27 [ERROR] WSREP: Failed to read from: wsrep_sst_xtrabackup --role 'donor' --address '192.168.0.5:4444/xtrabackup_sst' --auth 'sst:secret' --socket '/tmp/mysql.sock' --datadir '/var/mysql/' --defaults-file '/opt/local/etc/my.cnf' --gtid '6ceed930-6185-11e3-ae0d-8e7504b63f54:2'
131210 11:16:27 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup --role 'donor' --address '192.168.0.5:4444/xtrabackup_sst' --auth 'sst:secret' --socket '/tmp/mysql.sock' --datadir '/var/mysql/' --defaults-file '/opt/local/etc/my.cnf' --gtid '6ceed930-6185-11e3-ae0d-8e7504b63f54:2': 22 (Invalid argument)
131210 11:16:27 [Warning] WSREP: 0 (node1): State transfer to 1 (node2) failed: -1 (Not owner)
And in /var/mysql//innobackup.backup.log you see
xtrabackup_55: Error writing file 'UNOPENED' (Errcode: 32)
The fix was to remove this section from /opt/local/etc/my.cnf:
[sst]
streamfmt = xbstream
Monday, October 14, 2013
Install Baikal server on a SmartOS zone
Baïkal is a lightweight CalDAV+CardDAV server. Its home page is http://baikal-server.com/
Baïkal needs some PHP modules. I use Apache as the web server.
pkgin in apache-2.4.6 ap24-php53-5.3.27
pkgin in php53-pdo-5.3.27
pkgin in php53-xmlrpc-5.3.27
pkgin in php53-pdo_sqlite-5.3.27 php53-pdo_mysql-5.3.27
pkgin in php53-dom-5.3.27
Now configure php.ini to load the modules:
vi /opt/local/etc/php.ini
extension=xmlrpc.so
extension=pdo.so
extension=pdo_mysql.so
extension=pdo_sqlite.so
extension=mbstring.so
extension=dom.so
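A quick sanity check that the extensions are actually loaded; a minimal sketch, assuming the pkgsrc PHP CLI binary lives in /opt/local/bin:
/opt/local/bin/php -m | egrep 'xmlrpc|pdo|mbstring|dom'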
The rest is pretty simple, following the install guide.
Monday, September 9, 2013
Monday, September 2, 2013
download old versions of SmartOS
Here https://download.joyent.com/pub/iso/ you can find older SmartOS versions.
Friday, August 9, 2013
MegaCLI on SmartOS
I'm working on a Dell R710 with PERC H700 controller.
# prtconf -v|grep -i raid
value='RAID controller'
value='MegaRAID SAS 2108 [Liberator]'
# prtconf -v|grep -i perc
value='PERC H700 Integrated'
...
First of all, go to the LSI website http://www.lsi.com/support/Pages/Download-Results.aspx and search for the keyword "MegaCLI". Then download the latest MegaCLI (a Solaris version is available).
Unzip it and copy MegaCLI/MegaCli_Solaris/x86/MegaCli to your SmartOS global zone (under /opt/custom/bin, for example).
Now edit it with vi (yes, with vi), search for mr_sas and change that word (the driver name) to dr_sas.
Without doing this, the utility doesn't detect any controller, e.g.:
# ./MegaCli -adpCount
Controller Count: 0.
Exit Code: 0x00
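As an alternative to patching the binary by hand in vi, the same driver-name change can be done with an in-place substitution; a sketch, assuming perl is available in the global zone (it works because mr_sas and dr_sas have the same length):
perl -pi -e 's/mr_sas/dr_sas/g' /opt/custom/bin/MegaCli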
Thanks to jacques, Nils and elijah on the smartos-discuss mailing list.
Wednesday, August 7, 2013
using send_nsca from SmartOS global zone
If you don't want to install pkgin in the global zone, but you want to use the Nagios send_nsca command, this is how I did it.
Inside a zone install the nagios-nsca package.
pkgin in nagios-nsca-2.9.1nb2
From the global zone, cd into the zone's ZFS dataset and copy these files, along with the send_nsca binary itself, to a place of your own.
cd /zones/<uuid>
cp opt/local/gcc47/x86_64-sun-solaris2.11/lib/amd64/libgcc_s.so.1 /opt/custom/nsca/
cp opt/local/lib/libmcrypt.so.4 /opt/custom/nsca/
Create a configuration file
vi /opt/custom/nsca/send_nsca.cfg
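A minimal send_nsca.cfg only needs the password and the encryption method, and they must match the NSCA server configuration; the values below are placeholders:
password=secret
encryption_method=1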
Now set the LD_LIBRARY_PATH variable and run the command
export LD_LIBRARY_PATH=/opt/custom/nsca; echo "host;passive_check;0;All seems to be OK" | /opt/custom/nsca/send_nsca -H nagios.ip -p 5667 -d ";" -c /opt/custom/nsca/send_nsca.cfg
nsca and send_nsca version incompatibility
I have a Nagios NSCA server running on Debian squeeze. The installed nsca package is version 2.7.2.
I want to send passive Nagios checks from a SmartOS zone, but the pkgsrc version of the nsca client package is 2.9.1.
There are incompatibility issues between a 2.7 server and a 2.9 client (for example http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=670373):
Dropping packet with invalid CRC32 - possibly due to client using wrong password or crypto algorithm?
To resolve the problem without upgrading Debian to a newer release, you can use backports.
Following the backports instructions, I added a new source to apt:
vi /etc/apt/sources.list.d/backports.list
deb http://mi.mirror.garr.it/mirrors/debian-backports squeeze-backports main
apt-get update
Then I installed the newer version of the NSCA server.
apt-get -t squeeze-backports install nsca
Now the latest version of send_nsca available on SmartOS works.
Quick testing shows that the 2.7.2 client also seems to work with the 2.9.1 server.
nagios plugin to compare (master/slave) SOA records
This is a Nagios script that compares the SOA records on the master and slave name servers.
So you can tell whether the slave zone is expired or out of sync with the master.
http://exchange.nagios.org/directory/Plugins/Network-Protocols/DNS/checkexpire/details
https://github.com/alcir/checkexpire.sh
Tuesday, August 6, 2013
mass destroy zfs snapshots
Please pay attention...
zfs list -t snapshot|awk '{print $1}' > /tmp/zfstodestroy
Edit the file and remove the lines for the snapshots you want to keep: everything left in the file will be destroyed!
Then:
while read uuid; do zfs destroy "$uuid"; done < /tmp/zfstodestroy
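A slightly safer variant is to filter the list up front so only snapshots matching a pattern end up in the file; a sketch, where @auto- is just an example pattern (-H also drops the header line that awk would otherwise capture):
zfs list -H -t snapshot -o name | grep '@auto-' > /tmp/zfstodestroy
while read snap; do zfs destroy "$snap"; done < /tmp/zfstodestroy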
Friday, August 2, 2013
ENOSPC, open '/zones...metadata.json'
SmartOS.
Issuing this command
vmadm update f7c4fbb0-aa35-41d4-9d21-614caba785c7 max_physical_memory=2048
I got an error like
ENOSPC, open '/zones/f7c4fbb0-aa35-41d4-9d21-614caba785c7/config/metadata.json'
I tried to enter the zone:
zlogin f7c4fbb0-aa35-41d4-9d21-614caba785c7
but I got
[Connected to zone 'f7c4fbb0-aa35-41d4-9d21-614caba785c7' pts/12]
No utmpx entry. You must exec "login" from the lowest level "shell".
[Connection to zone 'f7c4fbb0-aa35-41d4-9d21-614caba785c7' pts/12 closed]
WTF?
The zone reached the zfs disk quota. Disk full.
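You can confirm it by comparing used space and quota on the zone dataset:
zfs list -o name,used,avail,quota zones/f7c4fbb0-aa35-41d4-9d21-614caba785c7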
But also
vmadm update f7c4fbb0-aa35-41d4-9d21-614caba785c7 quota=50
returns
ENOSPC, open '/zones/f7c4fbb0-aa35-41d4-9d21-614caba785c7/config/metadata.json'
So the solution is simple:
zfs set quota=40G zones/f7c4fbb0-aa35-41d4-9d21-614caba785c7
vmadm update f7c4fbb0-aa35-41d4-9d21-614caba785c7 max_physical_memory=2048
Successfully updated VM f7c4fbb0-aa35-41d4-9d21-614caba785c7
Thursday, August 1, 2013
X11 forwarding request failed on channel 0
I was testing X11 stuff in a SmartOS zone, using the latest pkgsrc-2013Q2, but I got the following error when doing ssh:
ssh dest.server -l root -X
X11 forwarding request failed on channel 0
The solution, as suggested by jperkin, is to add this line
XAuthLocation /opt/local/bin/xauth
to
/etc/ssh/sshd_config
and restart ssh.
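On SmartOS the restart can be done with svcadm:
svcadm restart svc:/network/ssh:default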
Or you can disable the Sun_SSH_1.5 service and install OpenSSH.
two new java tools for dicom files
Remember: I'm not a programmer and I'm not a dicom expert.
I've "released" two java "tools" for dicom files.
dcmfileget
https://github.com/alcir/dcmfileget
This tool (like the dcm2txt tool from dcm4che) reads a file and prints some DICOM attributes: patient name, study date, patientid, studyid, studyiuid.
Or, if a directory is passed instead of a single file, it scans each file and prints those attributes.
dicomfileCopy
https://github.com/alcir/dicomfileCopy
This tool scans a directory and copies each file to the destination directory, creating a path like /destination/study_year/study_month/study_day/patientname_studyi
Wednesday, July 31, 2013
bash while loop, function and ssh
Yesterday I struggled an entire morning with a while loop inside a bash script.
The loop ended even if there was a lot of work to do.
And the while loop ended early only when I was calling a function inside another function.
The function contained an ssh call to a remote host to get a result.
Could that ssh connection, by reading from the loop's stdin, be eating the rest of my while loop's input?
The solution:
remote() {
    echo -e "Checking if it is the first time we zfs send to remote $1"
    ssh $sshparam zfs list $1 </dev/null &>/dev/null
    EL=$?
    if [ $EL -eq 0 ]
    then
        return 1
    else
        return 0
    fi
}
The key part is the </dev/null redirection on the ssh call.
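In other words, any command inside a while read loop that reads stdin will silently swallow the rest of the loop's input. A minimal sketch of the pattern (the host list file is hypothetical):
while read host; do
    ssh -n "$host" uptime    # -n (or </dev/null) keeps ssh from reading the loop's stdin
done < /tmp/hostlist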
Thursday, May 23, 2013
mirth, smb and netapp filer
Mirth Connect uses the jCIFS Java library.
It seems there is a problem with authentication against a NetApp (ONTAP) filer:
Thu May 23 15:07:20 CEST [netapp:auth.trace.authenticateUser.loginTraceMsg:info]: AUTH: login from 192.168.0.190 rejected because bad local user password.
Try to add these lines to the Mirth config file mcservice.vmoptions
-Djcifs.smb.client.useExtendedSecurity=false
-Djcifs.smb.lmCompatibility=0
-Djcifs.smb.client.useBatching=false
And restart Mirth
Friday, March 22, 2013
smartos: add additional disk from a different zpool
I have 2 zpools
pool: zones
pool: zpool_repos
I want to add to a KVM machine a disk that lives not in the zones pool but in the other one.
My VM is already provisioned.
First of all I must stop it.
vmadm stop <uuid>
Now I create a ZFS volume
zfs create -V 10G zpool_repos/mykvmdata
Now I create a json file for the update
vi adddisk.json
{
"add_disks": [
{
"media": "disk",
"model": "virtio",
"nocreate": "true",
"boot": false,
"path": "/dev/zvol/rdsk/zpool_repos/uuu"
}
]
}
Now I update the KVM machine.
vmadm update 5942c90f-ecbb-4acd-822f-43e1901e2eb6 -f adddisk.json
That's all.
Now, after starting the virtual machine, from inside it I must partition the new disk (e.g. /dev/vdb on Linux) and format it.
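Inside the guest the steps are the usual ones; a sketch for a Linux guest, where the device name may differ:
fdisk /dev/vdb        # create a partition, e.g. /dev/vdb1
mkfs.ext4 /dev/vdb1
mount /dev/vdb1 /mnt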
Thursday, March 14, 2013
iPxe server (on CentOS) to serve SmartOS
This document talks about how I set up an iPXE server on CentOS 6 mainly to serve SmartOS.
Although I haven't understood some things (like undionly, or why there are so many DHCP requests from the PXE client), the following procedure works for me.
Links
- iPXE server using Ubuntu http://blog.alainodea.com/en/ipxe-smartos
- SmartOS wiki http://wiki.smartos.org/display/DOC/PXE+Booting+SmartOS
- The iPXE site http://boot.ipxe.org
- Take a look to http://networkboot.org/
Install dhcp and tftp services
yum install dhcp.x86_64
yum install tftp-server.x86_64
Get iPXE undionly
cd /var/lib/tftpboot/
curl http://cuddletech.com/IPXE-100612_undionly.kpxe > undionly.kpxe
or, alternatively:
curl http://boot.ipxe.org/undionly.kpxe > undionly.kpxe
Download iPXE
Optional: useful to get some network modules if the client uses gPXE (like the KVM one), which is unable to work with iPXE.
cd /root
mkdir git
cd git/
git clone git://git.ipxe.org/ipxe.git
cd ipxe/src
make
make bin/ipxe.pxe
cp bin/ipxe.pxe /var/lib/tftpboot/
DHCP Configuration
It is useful to configure the DHCP server to provide IP addresses only to known MAC addresses. I think it is also useful to link MAC addresses with IP addresses, so a server always gets the same IP.
Note: 192.168.56.10 is the TFTP server (it may be the same machine as the DHCP server).
vi /etc/dhcp/dhcpd.conf

ddns-update-style none;
option domain-name "my.domain";
option routers 192.168.56.1;
option domain-name-servers 192.168.56.2, 192.168.56.3;
default-lease-time 600;
max-lease-time 7200;
log-facility local7;
subnet 192.168.56.0 netmask 255.255.255.0 {
  deny unknown-clients;
  next-server 192.168.56.10;
  host hcn2 {
    hardware ethernet 00:19:99:e0:87:30;
    fixed-address 192.168.56.20;
    if exists user-class and option user-class = "iPXE" {
      filename = "menu.ipxe";
    } else {
      filename = "undionly.kpxe";
    }
  }
}
Configuring iPXE
vi /var/lib/tftpboot/menu.ipxe

#!ipxe

######## MAIN MENU ###################
:start
menu Welcome to iPXE's Boot Menu
item
item --gap -- ------------------------- Operating systems ---------------
item smartos Boot SmartOS (platform-20121228T011955Z)
item --gap -- ------------------------------ Utilities ------------------
item shell Enter iPXE shell
item reboot Reboot
item
item exit Exit (boot local disk)
choose --default smartos --timeout 30000 target && goto ${target}

########## UTILITY ITEMS ####################
:shell
echo Type exit to get the back to the menu
shell
set menu-timeout 0
goto start

:failed
echo Booting failed, dropping to shell
goto shell

:reboot
reboot

:exit
exit

########## MENU ITEMS #######################
:smartos
set base-url ${boot-url}/images/smartos/20130111T010112Z/platform/i86pc
kernel ${base-url}/kernel/amd64/unix -v -B console=text,smartos=true
module ${base-url}/amd64/boot_archive
boot || goto failed
goto start
Preparing the SmartOS image
From the SmartOS wiki: There are 2 files of importance, the kernel ("unix") and the boot archive ("boot_archive"). The paths to these files are significant and should not be omitted. They should be:
- (prefix)/platform/i86pc/kernel/amd64/unix
- (prefix)/platform/i86pc/amd64/boot_archive
So:
cd /var/lib/tftpboot/
mkdir images
cd images/
mkdir smartos
cd smartos
curl https://download.joyent.com/pub/iso/platform-latest.tgz > /tmp/platform-latest.tgz
cat /tmp/platform-latest.tgz | tar xz
ls | grep platform- | sort | tail -n1
platform-20130111T010112Z
mv platform-20130111T010112Z 20130111T010112Z
cd 20130111T010112Z/
mkdir platform
mv i86pc/ platform/
Please note: replace 20130111T010112Z with the current version.
Configuring iptables
To allow DHCP and TFTP traffic:
vi /etc/sysconfig/iptables-config

IPTABLES_MODULES="ip_conntrack_tftp"

vi /etc/sysconfig/iptables

-A INPUT -m udp -p udp --dport 67:68 --sport 67:68 -j ACCEPT
-A INPUT -m tcp -p tcp --dport 69 -j ACCEPT
-A INPUT -m udp -p udp --dport 69 -j ACCEPT
/sbin/service iptables restart
Configuring xinetd
To enable TFTP:
vi /etc/xinetd.d/tftp

...
disable = no
...
/etc/init.d/xinetd restart
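Finally, make sure dhcpd and xinetd are running and enabled at boot (standard CentOS 6 service handling):
service dhcpd restart
chkconfig dhcpd on
chkconfig xinetd on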
Tuesday, February 26, 2013
SmartOS: permanently set VNC port
vmadm update dece98e8-29d7-4394-8cf1-d0185e2258b7 vnc_port=35351
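You can verify the change with vmadm info, which prints VNC details like the ones shown below (the uuid is the one used in the update above):
vmadm info dece98e8-29d7-4394-8cf1-d0185e2258b7 vnc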
...
"vnc": {
"host": "192.168.0.40",
"port": 35351,
"display": 29451
},
...
Now you can configure Remmina to connect to 192.168.0.40:35351.
Friday, February 22, 2013
smartos vmunbundle exited with code 1
SmartOS joyent_20130207T202554Z.
Running
vmadm send f59c8669-b709-42cb-98e8-a2f1fa1ee7bf |ssh destination.host vmadm receive
I get this error:
vmunbundle exited with code 1
The problem was that, since I was doing some tests, the destination already had a ZFS dataset for the same zone from a previous attempt.
Solution, on the destination:
zfs destroy zones/f59c8669-b709-42cb-98e8-a2f1fa1ee7bf
Thursday, February 21, 2013
Links for today
Even if zip is not widely used under Unix: http://linux.about.com/od/commands/a/blcmdl1_zipx.htm
Friday, February 15, 2013
SmartOS, kvm, vnc and keyboard
SmartOS.
I have had problems with the keyboard using VNC to connect to a KVM guest.
Keyboard mapping I mean.
The solution that worked for me was:
vmadm update <uuid> qemu_extra_opts="-k it"
linux scp and umask
I use CentOS 5.x
The problem is that if I use scp or sftp to copy a file onto such a CentOS server, the umask is not taken into consideration.
The behavior of scp and sftp is also different.
Btw, a solution is to use OpenSSH 5, but I don't like to alter the distribution, since CentOS 5 installs OpenSSH 4.
Searching around I found various solutions (PAM, etc.) but none of them worked for me.
So, a way to get what I want is:
For sftp
vi /etc/ssh/sshd_config
...
# override default of no subsystems
# Subsystem sftp /usr/libexec/openssh/sftp-server
Subsystem sftp /bin/sh -c 'umask 0002; /usr/libexec/openssh/sftp-server'
and restart sshd
And for scp
vi /home/myuser/.bashrc
adding (0002 or whatever you want)
umask 0002
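A quick way to verify both cases (host and file names here are just placeholders):
scp testfile myuser@centos5.host:/tmp/
ssh myuser@centos5.host 'ls -l /tmp/testfile'    # should now show group-writable permissions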
Tuesday, February 5, 2013
smartos update_disks
I want to change the disk model in a KVM machine on SmartOS.
vi upddisk.json
{
"update_disks": [
{
"path": "/dev/zvol/rdsk/zones/e888a82a-bca1-49fda6d7-988debb09703-disk0",
"model": "ide"
}
]
}
vmadm update e888a82a-bca1-49fd-a6d7-988debb09703 -f upddisk.json
vmadm get e888a82a-bca1-49fd-a6d7-988debb09703
Tuesday, January 29, 2013
batch convert dv to avi xvid using ffmpeg
I am using OpenIndiana.
find bk/Video -type f -exec ~/mmpeg.sh {} \;
The mmpeg.sh script is based on sandipb's script.
(Some variables may be unused.)
https://github.com/alcir/mystuff/blob/master/mmpeg.sh
The converted files go into the /media/003-9VT166/converted/ directory, named after the full path starting from where find was launched.
So, the file
./bk/Video/2012_12/video01.dv
will become
/media/003-9VT166/converted/bk_Video_2012_12_video01.avi
Please note: I had no time to handle spaces in file names.
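The conversion itself boils down to an ffmpeg call roughly like this (a sketch; the exact options used in mmpeg.sh may differ):
ffmpeg -i bk/Video/2012_12/video01.dv -vcodec libxvid -qscale 5 -acodec libmp3lame -ab 128k /media/003-9VT166/converted/bk_Video_2012_12_video01.avi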
dcm4chee query study availability
This query can be useful to list, within a time range, the studies whose availability is ONLINE (0 = online, 1 = nearline).
SELECT distinct(study.study_iuid)
FROM series
JOIN study ON series.study_fk = study.pk
JOIN patient ON study.patient_fk = patient.pk
JOIN instance ON instance.series_fk = series.pk
where study.study_datetime >= '2000-01-01' and study.study_datetime <= '2010-01-31'
and study.availability = 0 ;
Openindiana: install ffmpeg
pfexec pkg set-publisher -p http://pkg.openindiana.org/sfe
pfexec pkg set-publisher -p http://pkg.openindiana.org/sfe-encumbered
pfexec pkg install ffmpeg
links for today
X11 connection rejected because of wrong authentication after sudo to another user
http://jianmingli.com/wp/?p=724
Monday, January 28, 2013
Another sql query for dcm4chee
Find study instance uid for all the files stored before a date.
SELECT distinct(study.study_iuid)
FROM series
JOIN study ON series.study_fk = study.pk
JOIN patient ON study.patient_fk = patient.pk
JOIN instance ON instance.series_fk = series.pk
JOIN files on files.instance_fk = instance.pk
where files.created_time < '2012-12-15';
Tuesday, January 15, 2013
useful links for today
zone provisioning on admin interface seems to get stuck
https://github.com/joyent/smartos-live/issues/158
Bad /etc/hosts entry when zone's IP is set to "dhcp"
zone-clone.markdown
smartos vmadm failed
Maybe this is only a workaround, a craftsman's quick fix.
I was in this situation:
# vmadm list
UUID TYPE RAM STATE ALIAS
76ffffff-cc23-4501... OS 256 failed db1
No way to boot or halt the zone.
(Cannot to start vm from state "failed", must be "stopped")
(login allowed only to running zones ... is 'installed')
The only way to solve the issues, for me, was:
vi /etc/zones/76ffffff-cc23-4501-b5f3-41a69c65b321.xml
and search for a line containing "failed", then delete this line.
Friday, January 11, 2013
today useful links
Trying to chainload iPXE with full feature set from a lesser featured one, whilst still being able to boot non-supported cards with UNDI
https://gist.github.com/4008017
Bootstrapping full iPXE native menu with customizable default option with timeout (also includes working Ubuntu 12.04 preseed install)
https://gist.github.com/2234639
Monday, January 7, 2013
virtualbox pxe
It seems that Virtualbox uses different PXE "roms" depending on the installation of Extension pack or not.
I.e., without the extension pack it seems to use a limited version of iPXE, and maybe some image formats are not handled (Exec format error, Could not boot image, etc.).
Friday, January 4, 2013
Centos Apache and Directory index forbidden
Today I struggled a little bit to make the directory index module work.
I'm on CentOS.
I installed the Apache (httpd) RPM.
No changes to configuration. Nothing.
I have created a conf file /etc/httpd/conf.d/my.conf
Alias /images "/var/www/images"
<Directory "/var/www/images">
Options Indexes MultiViews FollowSymLinks
AllowOverride All
Order allow,deny
Allow from all
</Directory>
I have created the /var/www/images directory.
In this directory I don't want to put any HTML page. I want to use the Apache autoindex module to serve a list of the files contained in this directory.
WTF, autoindex is not working...
And in the logs /var/log/httpd/error_log you can read
[Thu Jan 03 03:44:05 2013] [error] [client 192.168.56.1] Directory index forbidden by Options directive: /var/www/images/
The problem was that in /etc/httpd/conf.d/welcome.conf there was
Options -Indexes
So:
- you can delete that file, or
- you can change it to +Indexes
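Either way, reload Apache afterwards:
service httpd reload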
linux: printing pdf in booklet format
I have a 44-page PDF. I like trees. I want to save money.
So I want to print a booklet (pages 1 and 44 on sheet 1 side A, pages 2 and 43 on sheet 1 side B, etc.) from Ubuntu. I can't find the right way to accomplish this from Document Viewer (Evince?).
sudo apt-get install psutils
pdftops -nocrop -paper A4 -expand origin.pdf test.ps
psbook test.ps test-book.ps
psnup -2 test-book.ps test-book2.ps
The last command puts 2 pages on each sheet.
Note: /etc/papersize is consulted, so it is safer to set the page size explicitly, using psnup -pa4.
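To actually print the booklet, send the 2-up file to the printer duplexed, usually flipping on the short edge (a sketch using a CUPS option; depending on the printer you may need the long edge instead):
lpr -o sides=two-sided-short-edge test-book2.ps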
References:
https://sites.google.com/site/ocastillofelisola/Home/linux-stuff/convertingpsfilesintobooklets
http://ubuntuforums.org/archive/index.php/t-1210934.html