While re-purposing my old desktop for machine learning with GPU support for Theano and Keras, I ran into several issues and ended up writing some code to work around some of them and make others easier and more manageable. Someday I will write a more detailed series of articles on how I did that and what I learnt in the process, but today I just wanted to document the last steps I went through when converting my Anaconda-based ad hoc IPython notebook server into a persistent service.

The initial logic came from this blog post. However, I ran into a couple of issues: the config is for a native IPython install rather than an Anaconda-based one, and it did not pull in the environment variables Theano needs to find the nvcc compiler and enable the GPU optimizations.

Here is the final config in /etc/systemd/system/ipython-nb-srv.service:

[Unit]
Description=Jupyter Notebook Server

[Service]
Type=simple
Environment="PATH=/home/ipynbusr/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
ExecStart=/home/ipynbusr/anaconda3/bin/jupyter-notebook
User=ipynbusr
Group=ipynbusr
WorkingDirectory=/home/ipynbusr

[Install]
WantedBy=multi-user.target
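
One gotcha: a systemd service does not inherit your login shell's environment, so if Theano still cannot find nvcc or the CUDA libraries when run this way, more Environment= lines can be added to the [Service] section. A minimal sketch, assuming CUDA is installed under /usr/local/cuda (you would also prepend /usr/local/cuda/bin to the PATH line above):

# Assumption: CUDA lives under /usr/local/cuda; adjust paths to your install
Environment="LD_LIBRARY_PATH=/usr/local/cuda/lib64"
Environment="CUDA_ROOT=/usr/local/cuda"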

After saving the file, reload systemd and enable and start the service:

systemctl daemon-reload
systemctl enable ipython-nb-srv
systemctl start ipython-nb-srv
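
To confirm the service actually came up, and to tail its logs when it doesn't, the usual systemd tooling applies:

systemctl status ipython-nb-srv
journalctl -u ipython-nb-srv -f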

Some of the earlier high-level steps were:

  1. Install the CUDA packages for Ubuntu
  2. Install Anaconda
  3. Create a Python env
  4. Install the required packages (Theano, Keras, NumPy, SciPy, ipython-notebook, etc.)
  5. Create a .theanorc to make sure Theano uses the GPU (a sample sketch follows this list)
  6. Create an IPython notebook profile to run it as a server
  7. Create the IPython notebook server service
  8. Enjoy your IPython notebooks from a Chromebook or Windows machine 🙂
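
A minimal sketch of what the .theanorc from step 5 can look like; this assumes the old-style device = gpu backend and CUDA installed under /usr/local/cuda, so adjust for your setup:

[global]
device = gpu
floatX = float32

[nvcc]
fastmath = True

[cuda]
root = /usr/local/cuda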

I was doing an upgrade on my VNX5200 at work, and halfway through, the RDP session got disconnected. When I connected back, the Unisphere client was hung. I could tell from the screen that it had gone all the way to the end and was waiting for me to do a post-install commit of the code, but I couldn't, and had to kill the runaway Java client. Now I didn't know how to commit the code; restarting the upgrade process rightfully said there was nothing to upgrade and exited. After searching for a few minutes I ran into this article on the EMC community: https://community.emc.com/thread/123829?start=0&tstart=0

Here is the answer, updated for my setup:

1) Log into Unisphere Manager
2) Right-click on array icon (by default, it is the serial number of the array)
3) Choose “Properties”
4) Select the “Software” tab
5) Highlight the package “VNX-Block-Operating-Environment”
6) Click on the “Commit” button

[screenshot: vnx5200-upgrade-commit]
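
For what it's worth, the same commit should also be doable from Navisphere Secure CLI. I didn't test it this time around, so treat the following as a rough sketch (substitute your SP address and credentials):

naviseccli -h <SP_A_IP> -User <admin_user> -Password <password> -Scope 0 ndu -list
naviseccli -h <SP_A_IP> -User <admin_user> -Password <password> -Scope 0 ndu -commit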

There was some confusion about how to specify multiple DNS server IP addresses or domain search names with the Set-VMHostNetwork cmdlet. It turns out to be a simple comma-separated list that gets treated as a parameter array. Here is an example.

Connect-VIServer vCenterServerFQDNorIP
$ESXiHosts = Get-VMHost
foreach ($esx in $ESXiHosts) {
     # Pipe the current host into Get-VMHostNetwork so each host gets updated
     $esx | Get-VMHostNetwork | Set-VMHostNetwork -DomainName eng.example.com -DnsAddress dnsAddress1,dnsAddress2
}

Or as a one-liner:

Get-VMHost | Get-VMHostNetwork | Set-VMHostNetwork -DomainName eng.example.com -DnsAddress dnsAddress1,dnsAddress2
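
To check that the change landed on every host, the same network info objects can be dumped in a table (VMHost, DomainName and DnsAddress should be exposed as properties of what Get-VMHostNetwork returns):

# Quick verification of DNS settings across all hosts
Get-VMHost | Get-VMHostNetwork | Select-Object VMHost, DomainName, DnsAddress | Format-Table -AutoSize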

Here’s a quick how-to for adding iSCSI send targets on all hosts in your vCenter:

Connect-VIServer vCenterServerFQDNorIP
$targets = "StorageTargetIP1", "StorageTargetIP2"
$ESXiHosts = Get-VMHost
foreach ($esx in $ESXiHosts) {
  # Grab the software iSCSI adapter on this host
  $hba = $esx | Get-VMHostHba -Type iScsi | Where {$_.Model -eq "iSCSI Software Adapter"}
  foreach ($target in $targets) {
     # Check to see if the SendTarget exists; if not, add it
     # (-eq avoids partial matches, e.g. 10.0.0.1 matching 10.0.0.11 with -cmatch)
     if (Get-IScsiHbaTarget -IScsiHba $hba -Type Send | Where {$_.Address -eq $target}) {
        Write-Host "The target $target already exists on $esx" -ForegroundColor Green
     }
     else {
        Write-Host "The target $target doesn't exist on $esx" -ForegroundColor Red
        Write-Host "Creating $target on $esx ..." -ForegroundColor Yellow
        New-IScsiHbaTarget -IScsiHba $hba -Address $target
     }
  }
}

Now present the LUNs from the storage side and rescan all HBAs to see the new storage on the hosts:

Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs

Get command-line access via SSH or the ESXi console as the root user.

Then run the following commands to reset the agent, configure the SNMP v2c community, port, SysLocation and SysContact, and enable the service.

# Reset the agent to factory defaults
esxcli system snmp set -r
# Community string, listening port, SysLocation and SysContact
esxcli system snmp set -c esxsnmpusr
esxcli system snmp set -p 161
esxcli system snmp set -L "California, USA"
esxcli system snmp set -C admin@example.com
# Enable the SNMP service
esxcli system snmp set -e yes

Finally, run ‘get’ to confirm the configuration:

esxcli system snmp get

   Authentication:
   Communities: esxsnmpusr
   Enable: true
   Engineid: 00000000000000aaaaaa1000
   Hwsrc: indications
   Largestorage: true
   Loglevel: info
   Notraps:
   Port: 161
   Privacy:
   Remoteusers:
   Syscontact: admin@example.com
   Syslocation: California, USA
   Targets:
   Users:
   V3targets:
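
Before wiring it into the NMS, a quick sanity check from any Linux box with the net-snmp tools installed is to walk the standard system subtree (1.3.6.1.2.1.1) against the host:

snmpwalk -v2c -c esxsnmpusr <esxi-host-ip> 1.3.6.1.2.1.1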

That’s it. Now open your favorite NMS software and start monitoring. At work I use Cisco Prime NAM; my choice at home is Observium.