Compiling optiminer for AMD GPUs (Ubuntu/Debian)

This tutorial illustrates how to compile the sgminer fork from Optiminer in order to start cryptocurrency mining on a Ubuntu/Debian OS.

Since it took me a while to find all the necessary tools, components and instructions, I documented the installation process in this tutorial.

Please let me know if something is not working for you.

Have fun and happy mining!


  • Install the necessary base packages to compile the miner software later
apt-get install ocl-icd-opencl-dev libleveldb-dev libminiupnpc-dev libjsoncpp-dev git cmake libcryptopp-dev libjsonrpccpp-dev libboost-all-dev libgmp-dev libreadline-dev libcurl3 libcurl4-gnutls-dev opencl-headers mesa-common-dev libmicrohttpd-dev build-essential automake autoconf libssl-dev libncurses5-dev unzip
  • Download the AMD ADL SDK (or a newer version) from [here]
  • Download the AMD-APP-SDKInstaller-v3.0.130.136-GA-linux64.tar.bz2 file or a newer version from [here]
  • Extract the AMD APP SDK and install it into /opt
tar jxvf AMD-APP-SDKInstaller-v3.0.130.136-GA-linux64.tar.bz2
./ --confirm
  • Extract and install the AMD ADL SDK into /opt as well
mkdir /opt/ADL_SDK_V10.2
unzip -d /opt/ADL_SDK_V10.2/
  • Finish the tutorial by compiling sgminer
export CFLAGS="-O2 -Wall -march=native -I/opt/AMDAPPSDK-3.0/include -I/opt/ADL_SDK_V10.2/include" LDFLAGS="-L/opt/AMDAPPSDK-3.0/lib/x86_64"
git clone --recursive && cd sgminer
cp /opt/ADL_SDK_V10.2/include/* ADL_SDK/
make -j4

Finally: systemd-networkd to the rescue

People like me, working with different Linux distributions and automation, are always looking for ways to simplify and unify the different styles of system configuration, up to the point where it does not matter whether you prefer Ubuntu, Debian, RedHat, CentOS or whatever your Linux OS of choice is. Finally, systemd comes to the rescue and solves the network configuration issue with the systemd-networkd manager.

So how can you manage network configuration using systemd-networkd?

First check if you actually have it installed and running with

systemctl status systemd-networkd

If the service is not enabled, just enable it after you have added your interfaces.

To configure interfaces, or more precisely networks in systemd, you only need to add a config file with a .network suffix. In my case /etc/systemd/network/





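The original example file was lost; based on the description below, a minimal sketch of such a file (the interface name ens33 comes from the text, the secondary addresses are placeholders I chose for illustration) could look like this, e.g. saved as /etc/systemd/network/ens33.network:

```ini
[Match]
Name=ens33

[Network]
# Enable DHCP for IPv4 on this interface
DHCP=ipv4

# Secondary IP addresses (placeholder ranges) for haproxy testing
Address=192.168.100.10/24
Address=192.168.100.11/24
```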
The example above enables DHCP (v4) on the network interface ens33, a VMware interface (yes, I run VMware on my MacBook), while additionally adding secondary IP addresses for haproxy testing purposes.

Once the configuration is completed, enable and restart the systemd-networkd service:

systemctl enable systemd-networkd
systemctl restart systemd-networkd

The networkctl command can now be used to monitor the lifecycle of a network.

Pretty cool, right? Finally, one network manager to rule them all.

More information and many more available options can be found in the systemd-networkd man pages.

How much GPU RAM does ETH Mining use?

So I had this question going around in my head for a while.
And if you use an NVIDIA card, the question can be answered pretty fast. Just use the CLI tool nvidia-smi:

Screenshot: nvidia-smi

So with the current ETH DAG #131 you can comfortably mine until April 2018, when the expected DAG size would exceed the RAM of a 3GB GPU.
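As a back-of-the-envelope check, the DAG size can be estimated from the Ethash constants (initial dataset 2^30 bytes, growing 2^23 bytes per epoch, one epoch every 30000 blocks). The snippet below is a rough sketch; it ignores the small prime-adjustment step of the real algorithm, so it slightly overestimates:

```shell
# Rough Ethash DAG size estimate in MiB for a given epoch.
# Constants: 2^30 bytes initial size, 2^23 bytes growth per epoch.
epoch=131
size_mib=$(( ( (1 << 30) + (1 << 23) * epoch ) / (1 << 20) ))
echo "DAG #${epoch} is roughly ${size_mib} MiB"
```

For DAG #131 this lands at roughly 2 GiB, which is consistent with a 3GB card still being usable at the time of writing.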

It’s expected that the ETH network will switch to a “proof of stake” algorithm by then (more here). So don’t waste your money on 8GB cards.

Tales from the crypt: Neutron metadata issues

I’ve been operating OpenStack since 2014 and have come across a significant number of issues, mainly around Neutron. That makes sense given the importance of Neutron inside OpenStack: without it functioning properly, none of your workload has access to the network.

The particular situation we are looking at was reported as a performance issue with the Neutron metadata service, in a Neutron Linux bridge ML2 managed environment.

The Neutron metadata service implements a proxy between the OpenStack instance and the Nova and Neutron services to provide Amazon AWS EC2-style metadata.
This Neutron service is important for user instances for various reasons, including:
• Cloud placement decisions (what is my public IP, etc.)
• User Scripts and SSH Key injection into the boot process (typically via cloud-init)

Performance issues, resulting in client timeouts or unavailability of this service, directly impacted cloud user workload, which led to application unavailability. The issue was compounded by operating over 1000 instances inside one Neutron layer 2 network.

The way Neutron provides this service is by wrapping it into a Linux network namespace and running an HTTP proxy server, the neutron-ns-metadata-proxy. Network namespaces are common practice for separating routing domains in Linux, allowing custom firewall (iptables) and routing processing independent of the host OS. Additionally, the service scales per Neutron L2 network, a crucial piece of information moving forward.

What happened to this service?

A Rackspace Private Cloud OpenStack customer reported response times larger than 30 seconds for any request to the Neutron metadata service. Initial debugging on the user instances revealed that metadata requests got intercepted by a security appliance, but excluding the standard metadata IP from the proxy configuration via

export no_proxy="localhost,,localaddress,,"

did not solve the issue. At this point I knew the issue was related to the Neutron service or the backend services it uses, mainly Nova API (compute) and RabbitMQ (the OpenStack message bus).
Looking at the requests the Neutron service handles, I identified an unusual pattern in their frequency and realized that the configuration management tool Chef was requesting the metadata beyond the standard behavior expected when OpenStack instances boot/reboot.
From previous issues I knew that the Chef plugin ohai played a major role and that its HTTP connection handling had known inefficiencies, mainly the lack of support for HTTP persistence.
Continuing the research on the Neutron service and looking for ways to improve response times, I identified that the neutron-ns-metadata-proxy service was only capable of opening 100 Unix sockets to the neutron-metadata-agent. These sockets are used to talk to the Neutron metadata agent across the Linux network namespace boundary without opening additional TCP connections internally, mainly as a performance optimization.

Unable to explain the 100 connections limit at first, especially in the absence of Neutron backend problems (Neutron server) or Nova API issues, I began looking at the Neutron source code and found a related change upstream.
The Neutron commit added an option to parameterize the number of WSGI threads (WSGI is the web server gateway interface used for Python), but also lowered the default limit from 1000 to 100. This crucial information was absent from any Neutron release notes.
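For illustration, the limit can be raised back to its old value in the Neutron configuration. This is a sketch, assuming the option name introduced by that upstream change (wsgi_default_pool_size); verify it against your Neutron release:

```ini
[DEFAULT]
# Restore the pre-change pool size of 1000 greenthreads (and thus Unix
# sockets) per WSGI server; the new default is 100.
wsgi_default_pool_size = 1000
```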

More importantly, we had just found our 100 Unix sockets limit.

This also explained the second observation: connections to the Neutron metadata service got queued, causing the large delay in response times. This queueing was a result of using the eventlet and greenlet network event libraries in combination, a typical way of addressing non-blocking I/O in the Python environment.

So what comes next?

Currently I am looking to solve the problem in multiple ways.
The imminent problem should be solved with a Chef ohai plugin fix, proposed in Chef pull request #995, which finally introduces persistent HTTP connections and drastically reduces the need for parallel connections. First results are encouraging.

More importantly, the Neutron community has re-implemented the neutron-ns-metadata-proxy with HAProxy (LP #1524916) to address performance issues. The OpenStack community needs to verify whether the issue still occurs.

Alternatively, there are Neutron network design decisions that can assist with these problems. For example, one approach is to reduce the size of a Neutron L2 network to smaller than a /23, which allows Neutron to scale out the metadata service.

This approach allows the option to create multiple Neutron routers, scaling out the Neutron metadata service onto other Neutron agents, where one router is only responsible for serving the Neutron metadata requests. This is especially the case when the configuration option enable_isolated_metadata is set to True and project/tenant networks are attached to Neutron routers.

So as usual, Neutron keeps it interesting for us. Can’t wait to dissect the Neutron metadata service in a DVR environment. More to come…

Spice console issues with RHEL/CentOS 7 instances?

After deploying OpenStack Icehouse we noticed Spice HTML5 proxy console issues, in particular with CentOS 7 and RHEL 7 guests. Those guest consoles showed issues with character echoing: you were not able to see what you had typed inside the terminal. I tracked this issue down to a Spice HTML5 proxy issue that occurs whenever the guest uses a framebuffer-enabled console. After I disabled framebuffer mode and switched the console to text mode, the guest console was finally usable. Here are the instructions:

Please add the options “nofb nomodeset” to the GRUB_CMDLINE_LINUX variable inside the /etc/default/grub config file and regenerate the grub2 config.

  • Tested configuration /etc/default/grub :
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_TERMINAL="serial console"
GRUB_CMDLINE_LINUX="console=ttyS0 console=tty0 crashkernel=auto vconsole.keymap=us nofb nomodeset"
  • Rebuild the grub2 config

grub2-mkconfig -o /boot/grub2/grub.cfg

After the mandatory instance reboot the console will boot in text mode only, without using any framebuffer graphics device. The console should work as desired at this point.

Ever wondered why Windows guests come up with the wrong time when running inside OpenStack?

OpenStack starts instances in UTC time when using KVM; the simulated guest hardware clock is always set to UTC. This is independent of the host clock setting.

Windows OSes assume the hardware clock is set to local time, so such a guest boots up in UTC time until time synchronization finishes and corrects it to the desired time zone.

To change the hardware clock in windows, you can add this registry entry :


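The registry entry itself did not survive in this post; the commonly documented value for this behavior is RealTimeIsUniversal, which tells Windows to interpret the hardware clock as UTC (shown here as a .reg file, verify before use):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"RealTimeIsUniversal"=dword:00000001
```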
That will boot the instance with the correct time, since the clocks inside the guest and on the host are aligned using the same time zone information.

Additionally, please note that Microsoft Windows Server 2008/7 had a high-CPU issue when changing to DST.

That has been fixed with this hotfix:

2800213 High CPU usage during DST changeover in Windows Server 2008, Windows 7, or Windows Server 2008 R2

Update per 6/25/2015: This hotfix is only applicable to older OpenStack releases (Havana and lower). Newer releases of OpenStack start the guest in the local time zone of the host. Additionally, OpenStack needs to be aware that the image you’re using is a Windows guest: you have to set the os_type image property to windows. But beware of errors around those Glance properties; there are known issues where the image properties are not retained if you create new images from existing Nova instances.

Want to schedule AWS snapshots?

For all of you who need to schedule AWS snapshots and are not so familiar with Linux shell scripting, here is how I schedule EC2 snapshots.

I recommend setting up a dedicated script host in your AWS region where you can execute all your scripts. Usually a t1.micro AWS Linux instance will suffice.

The environment variable you need to access the AWS EC2 API:

export AWS_CREDENTIAL_FILE=$HOME/.awssecret

The file .awssecret has a simple format:


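The sample file did not survive in this post; the legacy EC2 CLI tools expect the AWS_CREDENTIAL_FILE to contain two lines like these (placeholder values):

```
AWSAccessKeyId=AKIAXXXXXXXXXXXXXXXX
AWSSecretKey=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```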
which you’ll get once you create your user and generate an AWS key at the IAM user management console.

How to call the script:

source $HOME/.bash_profile ; $HOME/bin/

Here is an example of how I use the script in a cron job:

#Backup of XXX
00 00 * * * ( source $HOME/.bash_profile ; $HOME/bin/ us-west-1 vol-f233464 10 )

Before you run the script, test your environment to verify everything is set up correctly:

 ec2-describe-snapshots --hide-tags --region us-west-1

Here now the code for my snapshot script:

#!/bin/bash
# Usage: <region> <volume-id> [backlog]
# Example output of the describe call used below:
# ec2-describe-snapshots --hide-tags --region us-west-1 -F volume-id=vol-xxxxxx
# SNAPSHOT snap-xxxxxx vol-xxxxxx completed 2013-09-19T23:24:26+0000 100% 519544898336 25 mysql 5.6
export PATH=$PATH:/opt/aws/bin

usage() {
 echo -e "$0\t<region> <volume-id> [backlog]"
 echo -e "$0\tus-west-1 vol-12345 31"
 exit 1
}

makeSnapshot() {
 local region=$1 vol=$2
 echo "Creating new snapshot for volume $vol"
 ec2-create-snapshot --region "$region" "$vol" -d "Backup $(date +'%Y%m%d%H%M%S') of $vol"
 RET=$?
}

deleteSnapshot() {
 local region=$1 snap=$2
 echo "Deleting oldest snapshot $snap"
 ec2-delete-snapshot --region "$region" "$snap"
 RET=$?
}

test -z "$1" && usage
test -z "$2" && usage
region=$1
volume=$2
test -z "$3" && backlog=5 || backlog=$3

# Collect all snapshot IDs for this volume
snaps=( $( ec2-describe-snapshots --hide-tags --region "$region" -F "volume-id=$volume" | egrep -o 'snap-[0-9A-Za-z]+' ) )
nosnaps=${#snaps[@]}

if [ "$nosnaps" -lt "$backlog" ]; then
 # Still below the backlog limit: just take a new snapshot
 makeSnapshot "$region" "$volume"
 test $RET -gt 0 && exit 1 || exit 0
else
 # At the limit: delete the oldest snapshot (by the timestamp embedded in
 # the description), then take a new one
 oldestTS=$( ec2-describe-snapshots --hide-tags --region "$region" -F "volume-id=$volume" |
  egrep -o 'Backup [0-9]+ of' | egrep -o '[0-9]+' | sort | head -n1 )
 snap=$( ec2-describe-snapshots --hide-tags --region "$region" -F "volume-id=$volume" -F "description=*${oldestTS}*" |
  egrep -o 'snap-[0-9A-Za-z]+' )
 deleteSnapshot "$region" "$snap"
 test $RET -gt 0 && exit 1
 makeSnapshot "$region" "$volume"
 test $RET -gt 0 && exit 1 || exit 0
fi

KVM Live Migration (RedHat)

Live Migration using shared storage

I really love the feature of migrating running VMs from one Linux hypervisor to another without the burden of pooling or the necessity of some sort of shared storage. That said, migrations using shared storage, e.g. NFS, are faster and easier to accomplish. The migration can be initiated using the virt-manager GUI tool, or even simpler, via the virsh CLI. As a requirement I installed libvirtd and opened the network communication for libvirtd (listen_tcp set to 1 in /etc/libvirt/libvirtd.conf). Also check the firewall settings on the hypervisor to make sure the libvirtd port is open (as root: netstat -ntlp | grep libvirtd).
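For reference, a sketch of the relevant /etc/libvirt/libvirtd.conf settings for unencrypted TCP (the option names come from the stock config file; use this only in trusted networks):

```
# /etc/libvirt/libvirtd.conf
listen_tls = 0        # disable TLS only if your tooling cannot use it
listen_tcp = 1        # enable the TCP transport used by qemu+tcp://
auth_tcp = "none"     # no authentication; restrict access via firewall instead
```

Depending on the distribution, the daemon may also need to be started with the --listen flag for these options to take effect.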

The following example shows how to migrate a VM over the network using libvirtd. You should always enable TLS for libvirtd, but encryption is not always supported by 3rd party products like CloudStack:

sudo virsh migrate --live --persistent --p2p --tunnelled <VM> qemu+tcp://<hypervisor>/system

Important is the --persistent option, which ensures the new VM on the target hypervisor stays persistent. If you don’t use the option, the VM configuration will automatically be removed from the target hypervisor and you have to start the VM on the old machine again.

I usually use a temporary live migration during hardware maintenance or overload situations, with the intention to run the VM on the old metal afterwards.


Live Migration using local storage

KVM allows you to live migrate a VM from local to local storage. The only requirements are enough RAM and an available destination disk image of the same size. This image needs to reside at the same path and file name.

  • Create a new disk on the destination KVM host
sudo qemu-img create -f qcow2 /var/lib/libvirt/images/<VM>.img 2G
  • Start migration on the source KVM host
sudo virsh migrate --live --p2p --tunnelled --persistent --copy-storage-all <VM> qemu+tcp://<hypervisor>/system




Install Windows 8 Pro Update without a previous product installed

Some people, like me, want to do a fresh installation of Windows 8 without having Windows 7 pre-installed. It’s faster and probably cleaner too. The only problem is that the early Windows 8 Pro keys were upgrade keys, so they can’t be used for a full installation. After some searching on Google I found a way to tell Windows that it was installed through the upgrade process and finally activate my key:

  • Start regedit as administrator (Windows+R key and type regedit)
  • Go to HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Setup\OOBE
  • Edit MediaBootInstall from 1 to 0
  • Start a command prompt as administrator
  • Type slmgr /rearm and reboot with shutdown /r
  • After restart, go into the command prompt as administrator again
  • Type slui.exe 3 and enter your key
  • If the last step doesn’t work, reboot and try the last command again.

    Gun Control heats up again!

    In the shadow of the recent tragedy at Sandy Hook Elementary School in Newtown, Connecticut, the anti-gun movement showed no respect and tried to use the news, the media and the current situation to push more anti-gun laws. Those who believe that’s the answer should carefully listen to this video.
    It expresses my thoughts better than anything else right now. Don’t get me wrong: everything that can be done to protect our children
    should be discussed, but not at the cost of taking away rights like the Second Amendment.
    Believe me, I’m originally from Germany and know first hand that laws cannot prevent criminal activities like the Winnenden shooting in Germany. I believe we have to discuss what has changed in our society that leads to those tragedies (social security, fair chances for everyone, availability of violent media, etc.) and how we can mitigate them.

    My true condolences go to all families who have lost their loved ones.