Upgrading PostgreSQL Using Homebrew

On October 5th the PostgreSQL Global Development Group announced the release of PostgreSQL 10. It comes with a tremendous amount of new features, like

  • Table partitioning
  • Logical replication
  • Improved parallel queries
  • Stronger password hashing
  • Durable Hash Indexes
  • and more.

A nice list, including explanations, can be found on Robert Haas’ blog.

This post explains how to upgrade to the latest version of PostgreSQL on macOS using Homebrew. At the time of this writing I was using macOS 10.13 High Sierra and PostgreSQL 9.6.5.

  1. Take a backup of your databases so that, in case anything goes wrong, you can recreate the data:
    $ pg_dumpall | gzip > all_db.sql.gz
  2. Stop the old server:
    $ brew services stop postgresql
  3. Install the new version (the old version remains installed):
    $ brew update
    $ brew upgrade postgresql

    You’ll see this message at the end of the upgrade process:

    To migrate existing data from a previous major version of PostgreSQL, see:
    You will need your previous PostgreSQL installation from brew to perform
    `pg_upgrade` or `pg_dumpall` depending on your upgrade method.
    Do not run `brew cleanup postgresql` until you have performed the migration.
    To have launchd start postgresql now and restart at login:
    brew services start postgresql
    Or, if you don't want/need a background service you can just run:
    pg_ctl -D /usr/local/var/postgres start

    Take a look in /usr/local/Cellar/postgresql. You should see both your old installed version and version 10.0:

    $ ls -l
    total 0
    drwxr-xr-x  12 stade  admin  384 Oct 18 19:28 10.0
    drwxr-xr-x  12 stade  admin  384 Oct  1 12:48 9.6.5

    The default linked binaries in /usr/local/bin now belong to the 10.0 installation, but the database directory has not yet been upgraded.

    $ which psql
    $ ls -l /usr/local/bin/psql 
    lrwxr-xr-x  1 stade  admin  34 Oct 18 19:28 /usr/local/bin/psql -> ../Cellar/postgresql/10.0/bin/psql
    $ psql -V
    psql (PostgreSQL) 10.0

    If you need to, you can switch your binaries between the old and new versions with brew switch postgresql <version>. For now, keep the new version active, as we’ll be using commands from the new version to complete the upgrade.

  4. Move the old cluster data directory out of the way:

    $ mv /usr/local/var/postgres /usr/local/var/postgres.old
  5. Initialize a new cluster data directory:
    $ initdb /usr/local/var/postgres.new
  6. Run the upgrade process:
    $ pg_upgrade -b /usr/local/Cellar/postgresql/9.6.5/bin/ \
    -B /usr/local/Cellar/postgresql/10.0/bin/ \
    -d /usr/local/var/postgres.old/ \
    -D /usr/local/var/postgres.new/

    Use the appropriate old and new binary and cluster directory locations if they differ from the ones shown here.

  7. Move the new cluster into place:

    $ mv /usr/local/var/postgres.new /usr/local/var/postgres
  8. Start the new server:
    $ brew services start postgresql
    ==> Successfully started `postgresql` (label: homebrew.mxcl.postgresql)
    $ pg_ctl -D /usr/local/var/postgres/ status
    pg_ctl: server is running (PID: 59964)
    /usr/local/Cellar/postgresql/10.0/bin/postgres "-D" "/usr/local/var/postgres"

    If you’ve made any customizations to the database connection configuration files pg_hba.conf and pg_ident.conf you will probably want to copy those changes over to the new cluster.

  9. Run the table analyzer to update statistics and check that your data looks right:

    $ ./analyze_new_cluster.sh
    $ psql
    psql (10.0)
    Type "help" for help.
    stade=# \d
    stade=# \q
  10. If the new data looks right, remove the old data cluster and the old software:
    $ ./delete_old_cluster.sh
    $ brew cleanup postgresql
  11. In case you had any extensions installed, you will have to install them again, or your PostgreSQL server might not start.
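Whether you need the full pg_upgrade dance at all depends on whether the major version changes, and the versioning scheme itself changed with PostgreSQL 10: before 10 the major version was the first two components (9.6), from 10 on it is the first component alone. A small sketch of that check (the function is mine, not part of any tool):

```python
def needs_pg_upgrade(old: str, new: str) -> bool:
    """Return True if moving between these versions crosses a major
    version boundary, i.e. requires pg_upgrade (or dump/restore).

    Before PostgreSQL 10 the major version is the first two components
    (e.g. 9.6); from 10 on it is the first component alone (e.g. 10)."""
    def major(version: str):
        parts = [int(p) for p in version.split(".")]
        return tuple(parts[:2]) if parts[0] < 10 else (parts[0],)
    return major(old) != major(new)
```

So 9.6.5 to 10.0 needs the migration, while 9.6.4 to 9.6.5 or 10.0 to 10.1 is just a binary swap.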

Monitoring a Synology Diskstation with Munin

I have been using Munin to monitor the health of my Raspberry Pi for a while now. As I have more devices installed in my network, I was looking for a way to monitor them as well. Since Munin uses a client-server model, you normally need to install a Munin node on each device to be monitored. Every five minutes the Munin server polls its clients for values and creates charts using RRDTool.

One of the devices is a network-attached storage device by Synology that I use for backups as well as external storage for movies, music and other files. The Diskstation is basically a server that runs a customized Linux (DiskStation Manager). In various forums I found information about how to set up the Synology Diskstation as a Munin node. An easier way is to use the Simple Network Management Protocol (SNMP). Just activate the SNMP service in the control panel of the Diskstation, give it a community name and you are all set:

Now tell the Munin master the community name, so it can fetch data from the Diskstation. To do so, create a file inside the /etc/munin/plugin-conf.d/ folder named after the host (e.g. diskstation) containing two lines:

[snmp_diskstation_*]
env.community secret_community_name

The community string acts as a password here. One note: with the community-based SNMP version 2 (SNMPv2c), the data is sent in clear text (unencrypted) over the network. It should only be used inside LANs behind firewalls, not in WANs. If security is a concern you should consider using SNMP version 3 (SNMPv3).
The Munin master now knows the community name, yet is unaware of what data to collect from the device. You can use the command munin-node-configure to discover the available plugins:

sudo -u munin munin-node-configure --snmp diskstation --snmpcommunity secret_community_name --shell

Munin now polls the device "diskstation" using the SNMP community "secret_community_name" and prints the corresponding symlinks which have to be added to /etc/munin/plugins. You can either create them manually or just pipe the output through | bash. After creating the symlinks, add the following host entry to your /etc/munin/munin.conf to have the Diskstation appear as a separate host. (Note: the address has to be the one of the Munin master which does the polling, not the IP address of the Diskstation.)

[diskstation]
    address <ip-of-munin-master>
    use_node_name no

Once added, the last step is restarting the Munin node:

sudo /etc/init.d/munin-node restart

After a couple of minutes the Munin server fetches the first values, stores them in its RRD files and generates the charts:
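If you monitor several SNMP devices, the two per-host snippets above can be rendered programmatically. A small Python sketch, assuming the Debian default paths and the standard snmp_<host>_* plugin section naming (the helper function itself is mine):

```python
def munin_snmp_config(host: str, community: str, master_ip: str):
    """Render the two per-host snippets Munin needs for SNMP polling:
    the plugin environment file and the munin.conf host entry."""
    # Contents for /etc/munin/plugin-conf.d/<host>: the section header
    # matches the snmp_<host>_* plugin symlinks, and env.community
    # hands the community string to those plugins.
    plugin_conf = f"[snmp_{host}_*]\nenv.community {community}\n"
    # Host entry for /etc/munin/munin.conf: the address points at the
    # Munin master (which runs the SNMP plugins), not at the device.
    munin_conf = (
        f"[{host}]\n"
        f"    address {master_ip}\n"
        f"    use_node_name no\n"
    )
    return plugin_conf, munin_conf
```

Writing the returned strings to the respective files and restarting munin-node reproduces the manual steps above.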

SolarPi – A Flask powered photovoltaic monitor

After collecting some photovoltaic data using PikoPy and some readings from the residential meter, it was time to put everything together. The data is collected by a couple of scripts triggered by cronjobs every five minutes.

$ crontab -l
*/5 * * * * python /home/solarpi/kostal_piko.py
*/5 * * * * python /home/solarpi/collect_meter.py
*/15 * * * * python /home/solarpi/collect_weather.py

The results are then written into a SQLite database. I chose SQLite over PostgreSQL for its lighter footprint on the Raspberry Pi, and since there will be no concurrent writes, SQLite should be capable of serving as the database backend.
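Each collector script follows the same pattern: open the database, append one row, close it. A minimal sketch using Python's built-in sqlite3 module (the table mirrors a simplified subset of the pv_data model shown below; treat names as illustrative):

```python
import datetime as dt
import sqlite3


def store_reading(db_path, current_power, daily_energy, total_energy):
    """Append one inverter reading to the SQLite database.

    Columns follow a trimmed-down version of SolarPi's pv_data model."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS pv_data (
                   id INTEGER PRIMARY KEY,
                   created_at TEXT NOT NULL,
                   current_power INTEGER,
                   daily_energy REAL,
                   total_energy INTEGER)"""
        )
        conn.execute(
            "INSERT INTO pv_data (created_at, current_power, "
            "daily_energy, total_energy) VALUES (?, ?, ?, ?)",
            (dt.datetime.utcnow().isoformat(), current_power,
             daily_energy, total_energy),
        )
```

The context manager commits the insert on success, so a crashed script never leaves a half-written row behind.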

The sole purpose of the web application is to fetch data from the database and render the results. I decided to use Flask, a microframework for Python based on Werkzeug and Jinja2. For the frontend I used the Bootstrap-based SB Admin 2 theme.

Initially I used SQLAlchemy as an Object Relational Mapper where I defined my models in Python:

class PVData(SurrogatePK, Model):
    __tablename__ = 'pv_data'
    id = Column(db.Integer(), nullable=False, primary_key=True)
    created_at = Column(db.Text(), nullable=False, default=dt.datetime.utcnow)
    dc_1_u = Column(db.Integer(), nullable=True)
    dc_1_i = Column(db.Float(), nullable=True)
    ac_1_u = Column(db.Integer(), nullable=True)
    ac_1_p = Column(db.Integer(), nullable=True)
    dc_2_u = Column(db.Integer(), nullable=True)
    current_power = Column(db.Integer(), nullable=True)
    daily_energy = Column(db.Float(), nullable=True)
    total_energy = Column(db.Integer(), nullable=True)

While using an ORM is very helpful for CUD operations, I decided to keep my read queries in raw SQL instead.
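For the read side that means plain cursor queries instead of model objects. A sketch against the same illustrative pv_data table (function name and column choice are mine):

```python
import sqlite3


def latest_power(db_path):
    """Raw-SQL read: return the most recent current_power value,
    or None if the table is empty.

    ISO-8601 timestamps sort correctly as text, so ORDER BY on the
    created_at column gives chronological order without date parsing."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT current_power FROM pv_data "
            "ORDER BY created_at DESC LIMIT 1"
        ).fetchone()
    return row[0] if row else None
```

A Flask view can call such a helper directly and hand the result to the template, with no session or mapper in between.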

The main view is a dashboard presenting all kinds of interesting KPIs like yielded, imported and exported energy, and how much money was received for the sold energy:

SolarPi Dashboard

Furthermore, there are detailed charts available for each day at a five-minute interval, as well as for the last 7 and 30 days:

SolarPi Daily Chart

The project is live at: http://solarpi.tafkas.net

The code is on Github at https://github.com/Tafkas/solarpi

PikoPy: A python package for working with a Piko Inverter from Kostal

The first step of my plan, building a Raspberry Pi based photovoltaic monitoring solution, is finished. I created a Python package that works with the Kostal Piko 5.5 inverter (and should, in theory, work with other Kostal inverters as well) and offers a clean interface for accessing the data:

from pikopy import Piko

# create a new Piko instance
p = Piko('host', 'username', 'password')

# get current power
print(p.get_current_power())

# get voltage from string 1
print(p.get_string1_voltage())

The next steps are collecting the data and processing it.

The library can be downloaded from https://pypi.python.org/pypi/pikopy
The code is on Github: https://github.com/Tafkas/KostalPikoPy

Determine your Fitbit stride length using a GPS watch

I have been carrying my Fitbit One with me for a little over two years, and it keeps tracking my daily steps. It also tracks the distance covered by multiplying those steps by a stride length, which you can either provide explicitly or implicitly by setting your height. In the winter of 2012 I bought my first GPS watch, a Garmin Forerunner 410 (since replaced by a Garmin Forerunner 920XT), to help me track my running (and other outdoor) activities. Since then I have worn it at every activity. Additionally, before every running session I started a Fitbit activity to measure my step count. Over the last 18 months I have recorded 69 runs using both the Fitbit and the watch. I put the data in a CSV file so I could load it into R, then performed a linear regression to determine my stride length.

Fitbit Stride Length Regression

In an ideal world, where the GPS watch performed with 100% accuracy and the Fitbit recognized every single step taken, the regression line would pass through the origin. Unfortunately neither of them is that accurate. Our model gives an intercept of 398.46 m, a slope of 1.00421 and an adjusted R² of 0.97505. Based on the intercept, I had covered a whole lap of a competitive running track before taking a single step. 😉
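The fit itself is ordinary least squares; the R script does it with lm(), but the computation is simple enough to sketch in a few lines of Python (the numbers in the usage note are made up, not my actual runs):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = intercept + slope * x.

    A plain-Python stand-in for R's lm(): here x would be the
    Fitbit-reported distance per run and y the GPS distance."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    intercept = mean_y - slope * mean_x
    return intercept, slope
```

Feeding it perfectly linear toy data like xs = [0, 1, 2, 3], ys = [1, 3, 5, 7] recovers intercept 1 and slope 2 exactly; on real run data the residual scatter is what the R² of 0.97505 summarizes.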

I have noticed that the placement of the Fitbit has a huge impact on the number of steps recorded. Attached to the pocket of my running shorts it seems to miss certain steps; I get much better results attaching it directly to my waist. Overall I am quite satisfied with the accuracy provided by the Fitbit. If you do a lot of interval training or vary your stride from session to session, your results will be further off.

The code for the regression is on Github.

A Raspberry Pi photovoltaic monitoring solution

A friend of mine had a photovoltaic system (consisting of 14 solar panels) installed on his rooftop last year. As I was looking for another Raspberry Pi project, I convinced him to let me set up a reliable monitoring solution that would give him real-time access to the data. The current setup comes with an inverter by the company Kostal.

Kostal Piko 5.5 Inverter

The Kostal Piko 5.5 runs an internal web server showing statistics like current power, daily energy and total energy, plus specific information for each string. However, the visual layout of the web page and the bundled software look like they were designed in the late 90s, and development appears to have been abandoned:

Kostal Piko 5.5 Web Interface

The power inverter itself logs data at either a 15-minute interval for about 100 days or at an hourly interval for about 400 days (roughly 9,600 entries either way, which suggests a fixed-size log). The current setting uses the 15-minute interval. Furthermore, my friend took notes irregularly from the built-in liquid-crystal display.

The final plan is now set: a Raspberry Pi would fetch data from the inverter and store it in a database, and a web application would render the results to a browser:

SolarPi SetUp

The project is live at http://solarpi.tafkas.net

Analyzing Sleep with Sleep Cycle App and R

I have been tracking my sleep for almost two years now using my Fitbit. I started with the Fitbit Ultra and then moved on to the Fitbit One after it came out. In October 2013 I found out about the Sleep Cycle app for the iPhone. For weeks, Sleep Cycle was listed as the best-selling health app in Germany, where currently (as of January 2014) it is in second place. The app promises to wake you up in the morning without feeling tired. This is indeed possible if the alarm goes off during light sleep and not during deep sleep. It also allows you to set some kind of sleep music (white noise) to help you fall asleep. After reading all the positive reviews on the App Store I decided to give it a try.

The app promises to wake you up in a time frame of up to 30 minutes prior to the alarm you set if it detects your movement in the morning. Even more important to me than the actual smart alarm feature was the possibility to collect some data while sleeping. In the morning you are presented with a chart of last night’s sleep pattern:
Sleep Cycle Screenshot of last night

The app also allows you to export the database as a comma-separated file containing: the time you went to bed, the time you woke up, sleep quality in %, wake-up mood and user-defined sleep notes. This gives you the opportunity to do some more analysis. I decided to fire up R and create my own charts.
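From such an export, computing per-night sleep duration is a one-liner per row. A Python sketch with assumed headers and delimiter; the exact column names and format of the export may differ between app versions, so check your own file first:

```python
import csv
import io
from datetime import datetime


def sleep_durations(export_text):
    """Return hours slept per night from a Sleep Cycle-style export.

    Assumes a semicolon-delimited file with 'Start' and 'End' columns
    holding 'YYYY-MM-DD HH:MM:SS' timestamps; adjust to your export."""
    reader = csv.DictReader(io.StringIO(export_text), delimiter=";")
    fmt = "%Y-%m-%d %H:%M:%S"
    return [
        (datetime.strptime(row["End"], fmt)
         - datetime.strptime(row["Start"], fmt)).total_seconds() / 3600
        for row in reader
    ]
```

The same list feeds directly into a duration-over-time chart like the one below.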

So far I have used the app to track 100 nights of sleep and decided to peek into the data. Let’s take a look at how long I slept each night:
Sleep Duration over time

It looks like the longer I slept, the higher the sleep quality was. A scatter plot of the data gives:
Sleep time vs. sleep quality

The chart also takes the sleep notes into consideration. You can clearly see that sleeping away from home results in lower sleep quality. The same applies to exercising (note: I tagged a night with exercising when I worked out late in the evening). In contrast, taking melatonin (3 mg dosage) increased the sleep quality.

Averaging the sleep quality by month shows that January was worse than the previous months. One explanation is a vacation I took, during which I did not sleep well at all.
Average sleep quality by month

The R code for the data wrangling and the charts:

Berlin Marathon 2014 Participants

After the 2013 Berlin Marathon sold out in less than four hours, the organizers decided to alter the registration process for 2014. First there was a pre-registration phase, followed by a random selection from the pool of registrants to receive a spot. Those who were selected had to register by November 11th, 2013. Any spots that were not confirmed by the 11th would be offered to pre-registered candidates according to the order in which they were randomly selected.

The organizers publish a list of all registered participants of the event on their website. Being curious how many runners are on the list I had two options: going through the entire list and counting, or downloading the entire list and letting the computer do the math. I chose the latter. If you have checked the website already, you saw that they present only parts of the list at a time and reload it asynchronously as you scroll down. I wrote a little Python script that queries the data and saves the JSON response to a CSV file.
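The script boils down to paging through the JSON endpoint until it returns nothing, then flattening the records into CSV. A sketch with the network part abstracted away; the fetch function and field names are placeholders, not the site's actual API:

```python
import csv
import io


def paged_to_csv(fetch_page, fields):
    """Flatten a paged JSON participant list into CSV text.

    fetch_page(n) stands in for the HTTP call the real script makes:
    it returns a list of dicts for page n, or an empty list once the
    list is exhausted. Extra keys in the dicts are ignored."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    page = 0
    while True:
        rows = fetch_page(page)
        if not rows:  # empty page marks the end of the list
            break
        writer.writerows(rows)
        page += 1
    return buf.getvalue()
```

In the real script, fetch_page would issue the same asynchronous request the site makes while you scroll, with the page number as a query parameter.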

Checking the file, we see there are only 16,707 participants so far. Sure, there are spots sold to agencies or given to sponsors, but how many will be handed out in the second wave? Until they announce the results, let’s look at the data:

The distribution of the year of birth shows a bimodal pattern. Most runners were born in the late 60s or early 70s, and then there is a second spike around 1980.
Looking at the top 10 participating nations, it is no surprise that Germany stands out by far, followed by Great Britain and Denmark. The only non-European country in the top 10 is the United States of America:

The code for generating the images:

Update: The official registration period is over and 23,286 runners have signed up.

Installing Oracle Java 7 SDK on the Raspberry Pi

Two days ago the official hard-float Oracle Java 7 JDK was announced on the official Raspberry Pi blog. Prior to this there was only the OpenJDK implementation, which lacked performance.

Furthermore, the Raspberry Pi Foundation announced that future Raspbian images would ship with Oracle Java by default.

If you want to give it a spin you can install the JDK with:

sudo apt-get update && sudo apt-get install oracle-java7-jdk

Using htop to monitor system processes on the Raspberry Pi

If you work a lot on the command line you are probably familiar with the top utility for seeing which process is using the most CPU or memory. There’s a similar utility called htop that is much more powerful. On top of the information that top provides, htop additionally shows usage per CPU, as well as a meaningful text graph of your memory and swap usage right at the top.

htop running on the RaspberryPi

As if that wasn’t enough, you can just use your up/down arrow keys to select a process, and then you can kill it with the F9 key if you like, or you can change the priority by using the F7 and F8 keys.

To install htop simply use
sudo apt-get install htop

Once installed you can start it using

htop