Having trouble updating WordPress if your server is running vsftpd?

The latest update to vsftpd states, in the changelog’s own words:

– Add stronger checks for the configuration error of running with a writeable
root directory inside a chroot(). This may bite people who carelessly turned
on chroot_local_user but such is life.

This kind of makes it useless for a virtual user setup. A workaround has been published at https://bbs.archlinux.org/viewtopic.php?pid=1038842#p1038842.

Essentially, setting chroot_local_user=NO in /etc/vsftpd.conf will solve the problem, but then you have the security issue of not having a chroot’d guest.
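For reference, the relevant bit of /etc/vsftpd.conf ends up looking something like this – a minimal sketch of the trade-off above, not a complete config (the allow_writeable_chroot option only exists in newer vsftpd builds, so check your version before relying on it):

# Workaround: stop chrooting local/virtual users entirely.
# The update is happy with this, but users can now wander the filesystem.
chroot_local_user=NO

# The setup the update now rejects: chrooted users with a writeable root.
#chroot_local_user=YES
# Newer vsftpd builds add this option to explicitly allow it:
#allow_writeable_chroot=YES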

Arduino Gas Meter Sensor – Part 1

Motivation

My newly ordered Arduino Uno came in the post today so I got to start building with it!

An upcoming sensing deployment needs a way of measuring gas usage, so I’m building a basic sensor to do it. Luckily, most gas meters have an LED which pulses every so often, and we can measure that. The one in my house informs me the light will pulse for every 10 dm^3 of gas, i.e. every time the second digit from the right on the meter ticks over. Failing this, the 0 on the rightmost dial is very reflective, and an LED can be used to bounce light off it.

As the extra bits I need to finish the sensor haven’t arrived yet – namely the SD card shield and a battery pack – I can’t lock the sensor in my gas cupboard and give it a proper test, so for the time being I’m turning off my computer screens and lights, then flashing a red head torch at it to simulate a pulse.

Code

After a bit of internet trawling, I found an ace site explaining how to do the wiring for a digital light sensor (http://www.ladyada.net/learn/sensors/cds.html#bonus_reading_photocells_without_analog_pins [this link seems to have died since the post was written; the circuit diagram will be linked at the end of the post]). From that, I added a method which keeps a rolling average of the last 5 measurements and reports whenever the current reading drops more than 10% below that average – with the RC technique, a lower count means brighter light (not exactly statistical, but I intend for these to be locked in a dark box…).

Here’s what I have so far:

/* Based on: Photocell simple testing sketch. 
Connect one end of photocell to power, the other end to pin 2.
Then connect one end of a 0.1uF capacitor from pin 2 to ground 
For more information see www.ladyada.net/learn/sensors/cds.html */

// Number of measurements for averaging
const int AVERAGE_LENGTH = 5;

int photocellPin = 2;     // the LDR and cap are connected to pin2
int photocellReading;     // the digital reading
int ledPin = 13;    // you can just use the 'built in' LED

// for the circular buffer
int lastReadings[AVERAGE_LENGTH];
int average = 0;
int counter = 0;

void setup(void) {
  // We'll send debugging information via the Serial monitor
  Serial.begin(9600);

  // Initialise the array 
  for(int i=0;i<AVERAGE_LENGTH;i++) {
    lastReadings[i] = 0;
  }   
}

void loop(void) {
  // read the resistor using the RCtime technique
  photocellReading = RCtime(photocellPin);

  // Calculate rolling average
  counter = (counter + 1) % AVERAGE_LENGTH;   // advance the circular buffer index
  lastReadings[counter] = photocellReading;
  calcStats();
  int bound = average - (average/10);         // 10% below the rolling average

  // Glorious debug
  Serial.print("Av = ");
  Serial.print(average);
  Serial.print(" = [");
  for(int i=0; i<AVERAGE_LENGTH;i++) {
    Serial.print(lastReadings[i]);
    Serial.print(",");
  }
  Serial.print("] ");
  if (photocellReading < bound) {
    Serial.print(" . RCtime reading = ");
    Serial.println(photocellReading);     // the raw analog reading
  } else {
    Serial.println();
  }

  delay(100);
}

// Calculates the new mean of the last AVERAGE_LENGTH measurements
void calcStats() {

  // average
  average = 0;
  for(int i=0;i<AVERAGE_LENGTH;i++) {
    average += lastReadings[i];
  }
  average /= AVERAGE_LENGTH;
}

// Uses a digital pin to measure a resistor (like an FSR or photocell!)
// We do this by having the resistor feed current into a capacitor and
// counting how long it takes to get to Vcc/2 (for most arduinos, that's 2.5V)
int RCtime(int RCpin) {
  int reading = 0;  // start with 0

  // set the pin to an output and pull to LOW (ground)
  pinMode(RCpin, OUTPUT);
  digitalWrite(RCpin, LOW);

  // Now set the pin to an input and...
  pinMode(RCpin, INPUT);
  while (digitalRead(RCpin) == LOW) { // count how long it takes to rise up to HIGH
    reading++;      // increment to keep track of time 

    if (reading == 30000) {
      // if we got this far, the resistance is so high
      // its likely that nothing is connected! 
      break;           // leave the loop
    }
  }
  // OK either we maxed out at 30000 or hopefully got a reading, return the count

  return reading;
}

 

Next Steps

Well, apart from the hardware I’m waiting on, some issues need to be addressed which may or may not require extra hardware – I’ve only just thought of them.

  • A DateTime-equivalent object so I can timestamp each pulse I register
  • Work out how long these will last on battery
  • Can I set an interrupt to go off when a digital value hits a threshold? Or does this require analogue input? If I can, it would massively save on battery as there would be no polling! But it may require fiddly per-house calibration, which the brute force method ignores (a rough sketch of what I mean follows this list)
  • Laser-cut/3D-printed box and some form of mounting which will let me attach it to anything. Probably going to be velcro.
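On the interrupt idea: the following is only a hypothetical sketch of what I mean, and it assumes different wiring from the RC circuit above – the LDR sitting in a plain voltage divider feeding pin 2, tuned so the pin only reads HIGH when the meter LED flashes. The pin choice and divider are guesses, not something I’ve tested:

// Hypothetical interrupt-driven pulse detector (untested sketch).
// Assumes the LDR plus a fixed resistor form a voltage divider into pin 2,
// tuned so the pin only goes HIGH when the meter LED flashes.

const int PULSE_PIN = 2;                // external interrupt pin on the Uno
volatile unsigned long pulseCount = 0;  // updated inside the ISR

void onPulse() {
  pulseCount++;                         // keep the ISR as short as possible
}

void setup() {
  Serial.begin(9600);
  pinMode(PULSE_PIN, INPUT);
  // Fire when the divider voltage rises past the digital HIGH threshold
  attachInterrupt(digitalPinToInterrupt(PULSE_PIN), onPulse, RISING);
}

void loop() {
  // No polling needed; just report occasionally (or sleep here to save power)
  noInterrupts();
  unsigned long count = pulseCount;     // copy while interrupts are off
  interrupts();

  Serial.print("Pulses so far: ");
  Serial.println(count);
  delay(1000);
}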

Here’s a video of it working:

JSXGraph

I found this cool graphing JS library which has a WordPress plugin! It’s called JSXGraph and is rather nifty! Here is an example graph showing Facebook users over time.


[Rendered JSXGraph chart appears here: millions of Facebook users against days online.]

Here’s the code:


<jsxgraph width="600" height="200" box="box">
var brd = JXG.JSXGraph.initBoard('box',
            {boundingbox: [-50, 900, 3000, -150], 
             keepaspectratio:true, 
             axis:true, 
             grid:0, 
             showNavigation:false});

brd.suspendUpdate();

//Points for graph
var p = [];
  p[0] = brd.create('point', [0,0], {style:6,name:""});
  p[1] = brd.create('point', [1665,100], {style:6,name:"100"});
  p[2] = brd.create('point', [1890,200], {style:6,name:"200"});
  p[3] = brd.create('point', [2050,300], {style:6,name:"300"});
  p[4] = brd.create('point', [2193,400], {style:6,name:"400"});
  p[5] = brd.create('point', [2359,500], {style:6,name:"500"});
  p[6] = brd.create('point', [2527,600], {style:6,name:"600"});
  p[7] = brd.create('point', [2672,700], {style:6,name:"700"});
  p[8] = brd.create('point', [2787,800], {style:6,name:"800"});

//Line
var graph = brd.create('curve', 
              brd.neville(p),
              {strokeColor:'red',
               strokeWidth:5,
               strokeOpacity:0.5});

//Labels
xtxt = brd.create('text',[1400,-110, 'Days Online'], {fontSize:12});
ytxt = brd.create('text',[10,850, 'Millions of users'], {fontSize:12});

brd.unsuspendUpdate();
</jsxgraph>

Programming languages course

So I ran a talk on learning programming languages last week. It was the second time I had done that particular talk, and this time the hardware setup went smoothly – as it was done by Stephen Wattam, the CSLU VP.

We had a pretty good turnout, mostly first-year undergraduate students who so far had only played with a little C. I took pictures of everyone hard at work on their task… well, OK, they were mostly on tryruby.org, which was even better.

It showed that they had an interest in a new language which is fairly good for prototyping and will let them try out their ideas fast. I may have semi-pushed them onto it in my talk, so I’m glad they were listening. No one tried learning Haskell though, but I’ll drop in on everyone next term and see how they are doing. The slides and some info for my talk can be found at the CSLU site.

On a side note… my Instagram t-shirt came! I’m not sure if I’ll ever wear it outside as it’s a bit long, but still!

Rain – Agent-based water

Well, I’ve been wanting to do this for a while now, and on Sunday, with a freshly installed (and therefore speedy) netbook under my belt, I thought I’d have a crack at it.

Last year (maybe even two years ago now), I made a weak plasmoid generator with the intention of using it for terrain generation. I’ve always wanted to use that library for some agent-based programming, and a simple (rule-wise) example of that would be water: you put some water agents on the map, they move as low as they can go, then evaporate. This kind of does that, and definitely suffers from “proof of concept” syndrome. Water moves, but doing anything fancy will require a rewrite, which I’ll probably end up doing on my next free Sunday.

So, using a library called Gosu to handle the drawing and the event loop, and a library called TexPlay which let me modify pixels, I got a renderer up and running which displayed a map of tiles (1-pixel tiles 😉 ), with the colour defined by a lambda that was passed to all of them.

As I’m learning Haskell at the moment, I thought I’d give some lambdas a go, and it actually made things really easy.

There are two types of agent in this program: Rains and Sources.

  • Rains just flow to a low point.
  • Sources make Rains.

Rains become Sources if they hit a low point, which basically has the effect of stacking the Rains that have pooled there so they can make lakes. As Rains are destroyed when they stop, and Sources can only produce a finite number of Rains based on how many are there when the Source is made, the system sort of stays constant. Initial Sources are given enough Rains to cover the whole map one deep.

That is awkward to explain.
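Since it is awkward in prose, here is a very rough sketch of the rules as described above. The actual project is Ruby (Gosu/TexPlay), so this C++ is purely illustrative – made-up names and details, not the code in the repo:

// Rough C++ sketch of the rules above. The real project is Ruby (Gosu/TexPlay);
// every name and number here is illustrative only.
#include <cstddef>
#include <vector>

struct Tile { double height = 0.0; int pooled = 0; };

// A Rain agent flows downhill to the lowest neighbouring tile until it is
// stuck at a local low point, where it pools.
void dropRain(std::vector<std::vector<Tile>>& map, std::size_t x, std::size_t y) {
  while (true) {
    std::size_t bx = x, by = y;
    for (int dy = -1; dy <= 1; ++dy) {
      for (int dx = -1; dx <= 1; ++dx) {
        std::size_t nx = x + dx, ny = y + dy;
        if (ny < map.size() && nx < map[ny].size() &&
            map[ny][nx].height < map[by][bx].height) {
          bx = nx; by = ny;
        }
      }
    }
    if (bx == x && by == y) break;   // nowhere lower to go: stop flowing
    x = bx; y = by;
  }
  // Pool here; in the real thing this Rain would become a Source that can emit
  // a finite number of new Rains, so the total amount stays roughly constant.
  map[y][x].pooled += 1;
}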

Most of the issues with this program were making the renderer fast enough to work on a netbook, and as I’m not a graphics man, I made many rookie mistakes.

This is fairly mesmerising to watch, and I’ll definitely improve it further by:

  • Making the map colouring sample from a colour->height table
  • Have water level as a tile attribute to make things cleaner
  • Fix evaporation
  • Make it so that tiles which have a constant flow of agents over them are distinguished from one-off “rain” paths
  • Fix Rain

It’s a project on GitHub – feel free to fork it here: https://github.com/carl-ellis/Rain

 

Old school screen capture.

Video here:  Rain – agent-based

 

4 weeks in corporate research – initial thoughts

So I’ve spent the last 4 weeks in Cambridge working as an intern at Microsoft Research and I thought I’d share my observations on the differences between academia and corporate research.

Academia, I find, is far from the ivory tower that it once was. Forgetting the worrying need to find economic benefit for projects, most research is now being spun as a product.

Surely the last thing you want for a product is a buggy, bloated research prototype, and surely the last thing you want from a research project is a polished product. I mean, you want it for one thing: to prove a hypothesis for your thesis.

This, of course, is a massive generalisation, and applies more to the recent batch of Ph.Ds coming through, especially as they come through doctoral training schemes which mesh (mostly unsuccessfully) different fields together. Still, scoring a blue-skies research project without lying through your teeth in the impact section of a proposal is like finding real ale in Essex.

Of course, there is a positive side to academia too: the freedom to tackle your problem via any means. Flexible working hours (unless you are an RA), flexible supervision, flexible scope. You can produce a highly polished, massively overworked Ph.D, or the bare minimum which gets the job done. It is a very personal thing. Research projects are a bit more managed – you have a more rigid supervisory system and project meetings – but your section of the work is pretty much yours to do with as you will.

This environment breeds two types of people: the successful ones, who generally ask for and give help to their peers, accept criticism with grace, and thrive in a space where they make the rules; and the others, who, having seen the gaping ravine of work in front of them, bottle it and fail. Maybe not straight away or suddenly – it can creep up after a year or two – but Ph.Ds have been known to just disappear into industry after 4 years, without a word to anyone. It is very easy to lose sight of where you are aiming to get to, reaching a false summit of your thesis and calling it done.

Academia is very much a dog-eat-dog world. The UK has a much nicer tenure-free environment, but the tenet of the American “publish or perish” culture still exists. Academics live off their reputation, and their reputation is written in the black ink of a bibliography.

Corporate research is exactly the same landscape but with a few key differences.

For a start, the “build a prototype” message is very clear, especially for systems which may one day be products. You are building and evaluating a proof of concept, as it should be.

Secondly, the atmosphere is completely different. Whereas in the academic environment it is almost taboo to ask a struggling Ph.D how their work is going, in corporate research struggling researchers are actively propped up, and discussions at lunch and the pub are refreshingly problem orientated.

Thirdly, your supervisor is your manager. From a managerial point of view this is awesome: you have someone who is your boss and *knows* what they are talking about, whilst still being your supervisor and knowing all the issues that come from research and how best to stimulate ideas out of dead ends. From an intern’s perspective this is also good, as seeing your supervisor as your boss makes you want to impress them more, and meet deadlines days earlier.

Finally, the pay is miles better.

Those are the good bits, and of course, there are some bad bits too.

Corporate research labs tend to have an “eat your own dog food” policy, which means that if the company creates a tool that can do your job, you use it, unless you can find a valid research reason not to. Working at Microsoft and being a Linux user, you can see how this has led to some initially slow productivity as I’ve readjusted to an alien tool-chain.

There are also some scary law type things which get attached to the job, such as losing a kidney if I speak of what I see on whiteboards and such. However, this style of development is slowly losing ground as projects like Gadgeteer are being released under an Apache licence.

As a final point, having worked in some small companies where you have the “family” feel, I find that you still get this here. It may be due to the organisation of the research lab, but everyone is very friendly and you associate with your research group quite strongly – though not in a “compete against the other groups” way, as everyone in the building is amazingly friendly.

So far I’m enjoying it, we’ll see if I still do in 8 weeks time 😉

C

Installing WP on arch, and migrating from blogger

So I’ve migrated my blog from Blogger to WordPress. With the advent of Google+ this could have been a premature move, but WordPress is just *nicer*.

Some major points about this migration.

  1. From Google’s servers to my own
  2. Want to have support for multiple WordPress installs
  3. WordPress gets things via FTP (eurgh)

So, points 1 and 2.

I made a directory in /srv for the numerous WordPress installs, and then created a MySQL database ready for the blogs (WP lets you have multiple blogs on the same database by using different table prefixes). As I want to have multiple users and the FTP features, I decided that this prefix would define the internal blog name. So, for example, let’s make a blog with the prefix ex (a rough command sketch follows the list below).

  1. create the directory exwordpress
  2. make sure the directory is owned by the http user
  3. set permissions to 775, via sudo chmod -R 775 .
  4. grab the wordpress tarball and extract it
  5. configure any traffic for your blog domain to go to /srv/[wordpress directory]/exwordpress/
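To be concrete, the steps above look roughly like this on my box – the /srv/wordpress parent directory is just an example name, and http is the web server user on Arch:

cd /srv/wordpress                      # example parent directory, adjust to taste
sudo mkdir exwordpress
curl -o /tmp/wordpress.tar.gz https://wordpress.org/latest.tar.gz
sudo tar xzf /tmp/wordpress.tar.gz -C exwordpress --strip-components=1
sudo chown -R http:http exwordpress    # "http" is the web server user on Arch
sudo chmod -R 775 exwordpress
# then point the web server virtual host for the blog domain at this directory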

Load up your page, configure WordPress to point to your database, and voilà – your basic WordPress setup is done.

Now, I wanted to import my Blogger content, so: dashboard, Tools, Import, Blogger… ahh, I need to install a plug-in. Oh, it needs FTP access to my server…

On to point 3

I used vsftpd, which required some fiddling with PAM. There is a sample config on the wiki page which works out of the box. If you want to test it, just FTP to your server using your virtual user credentials and try to create a temporary directory. If you can, job done.

So, I finally got the Blogger content imported, which is fine but for a few minor issues.

  • Every title, and the content, is preceded by a single “>”
    • Hey, if it is open source, I’ll see if I can find a fix …
  • tags are converted to categories
    • Which isn’t that much of an issue with the tag<->category converter

So, conversion done, just a pity that the only way to fix the conversion bug was to manually edit my posts.