Android Maps and Routing

Very quick one here.

I’ve been trying to get mapping, especially routing, working on an Android application I’m developing. I will save you a lot of trouble and tell you to use the built-in Google services.

In fact, I found a gem of a post [1] which shows you how to open an intent for routing. I’ll add in some handy extras.

// Uri to open - this will work in your browser and is actually the uri that maps generates itself
Uri uri = Uri.parse("http://maps.google.com/maps?&saddr=" + userLocOverlayItem.routableAddress() + 
                    "&daddr=" + destinationOverlayItem.routableAddress() + 
                    "&dirflg=w");

// Create an intent to view the URI
Intent intent = new Intent(Intent.ACTION_VIEW, uri);

// [Bonus] Set the intent class as a map, so you can avoid an annoying pop up asking whether to
// open in the browser or maps
intent.setClassName("com.google.android.apps.maps", "com.google.android.maps.MapsActivity");

// Start the activity and close your launching activity
startActivity(intent);
finish();

I added a couple of things, as my application needed walking directions and there was an annoying pop-up asking what to open the URI with.
First, appending &dirflg=w to the URI forces walking directions.
Second, setting the intent’s class name as in the snippet hints to Android which application should handle the URI [2].

This isn’t new, but it was hard to track down something clear via searching, so I’m consolidating.

References

[1] http://smartandroidians.blogspot.co.uk/2010/06/showing-route-through-google-map-in.html

[2] http://stackoverflow.com/questions/8132069/calling-google-map-using-intent?lq=1

SlimDX and Shaders – Constant Buffers

Setting up

Having played with a few GLSL shaders in C++, I thought that moving to a DirectX/HLSL solution would be fairly simple.

Creating the simplest pixel shader was easy enough, and SlimDX is a decent wrapper for DX in C# – so after an hour or so I had the standard working triangle floating in space. I wanted to go further (obviously). Having spent the last few days delving into the demoscene and finding the most amazing site for pixel-shader techniques (http://www.iquilezles.org/www/index.htm), I wanted the shader to work with (x, y) coordinates between (-1, 1) rather than absolute screen coordinates.

This requires passing the current render-target width and height to the shader to normalise with. This is done using a constant buffer, which on the shader side looks like:

cbuffer ConstBuffer : register(b0)
{
    float2 resolution;
}

// Simple vertex shader
float4 VShader(float4 position : POSITION) : SV_POSITION
{
    return position;
}

// Pixel shader
float4 PShader(float4 position : SV_POSITION) : SV_Target
{
    // Get normalised (x,y) in the range (-1,1)
    float2 p = -1 + 2 * (position.xy / resolution);

    // Spikes in the corners
    float c = abs(p.x * p.y);
    return float4(c, c, c, 1.0f);
}

To get the resolution variable into the register for the shader to use, you must create a constant buffer in the program code and assign it to the shader. The constant buffer’s size must be a multiple of 16 bytes, so if your data is smaller, just pad it up to the next multiple of 16.

// Create data stream, we only need 8 bytes, but round up to 16
var resolution = new DataStream(16, true, true);
// Fill the stream with width/height info - I'm using a renderform
resolution.Write(new Vector2(form.ClientSize.Width, form.ClientSize.Height));
// Rewind the stream
resolution.Position = 0;
// Create and bind a buffer
context.PixelShader.SetConstantBuffer(new Buffer(device,     //Device
                                                 resolution, //Stream
                                                 16,         // Size
                                                 // Flags
                                                 ResourceUsage.Dynamic,
                                                 BindFlags.ConstantBuffer,
                                                 CpuAccessFlags.Write,
                                                 ResourceOptionFlags.None,
                                                 4),         // Structure byte stride
                                      0); // Register slot (0 = b0)

This lets us run the above shader, giving us:

Simple Shader results

Bonus

Having played on Shadertoy, I repurposed the Deform effect for a static image.

Shader:

cbuffer ConstBuffer : register(b0)
{
    float2 resolution;
}

// Simple vertex shader
float4 VShader(float4 position : POSITION) : SV_POSITION
{
    return position;
}

// Pixel shader
float4 PShader(float4 position : SV_POSITION) : SV_Target
{
    // Get normalised (x,y) in the range (-1,1)
    float2 p = -1 + 2 * (position.xy / resolution);

    // Deform focus
    float2 m = float2(0.2f, 0.1f);

    // Deform
    float a1 = atan((p.y - m.y) / (p.x - m.x));
    float r1 = sqrt(dot(p - m, p - m));
    float a2 = atan((p.y + m.y) / (p.x + m.x));
    float r2 = sqrt(dot(p + m, p + m));

    float2 uv;
    uv.x = 0.2 + (r1 - r2) * 0.25;
    uv.y = sin(2.0 * (a1 - a2));

    float w = r1 * r2 * 0.8;
    float c2 = abs(uv.x * uv.y);
    float4 col = float4(c2, c2, c2, 1.0f);

    return float4(col.xyz / (0.1 + w), 1.0f);
}

Result:
Deform shader

Android development talk

On Thursday 10th May, I was asked to present a short talk on Android development. I have not been coding Android for very long, but I had learned enough to get background services working and to stop activities popping up where they shouldn’t. The slides won’t be as complete without my talk, but they should get the idea across.

The slides give a very light overview of the tools at your disposal.

Viral Marketing

I’ve been slowly writing up a skills bank for the Lancaster Award, and while looking for evidence of organisation I came across the old working documents for a viral marketing campaign I ran during my first intro week as president of the Computing Society.

I was trying to attract more people to the academic side so I made a simple challenge that went up on posters around campus. The poster that was put up can be found here.

This led to a landing page which sadly is no longer available at the original URL, as I no longer control the domain; however, I have put it back up here (for those who wish to give the challenge a go, change the http://cslu.co.uk domain to http://109.74.204.44).

I think overall we had 5 people email in and solve it, which wasn’t very many – but I remember there was a lot of talk wondering what the posters were about, and as they had our logo on them a fair number of people turned up to the first meeting.

I think if I were to do it again, I wouldn’t say which society it was (there is still a lot of social stigma attached to computing societies) but would just put the date, time, and venue of the first meeting. A few different levels of challenge with prizes would be better too. The challenge was very easy and would fall very quickly if put up against the internet, but a university has a different make-up of population, and not everyone who could easily solve mental challenges wants to do so during intro week.

It was quite exciting seeing the posters scattered everywhere though.

Big displays and tabletops – the wrong approach?

The current model

Current research trends favour large public displays, hidden projectors, and tabletop displays – Microsoft’s Surface being a prominent example, along with the many research projects involving public displays.

A lot of these projects have options for multiple users operating the system; in fact, some are made purely for collaboration in the workplace (Highwire’s CoffeeTable, and Intel’s What Do You Bring To the Table?). However, when systems are placed in a public setting, their use becomes more of a public service than a means of collaboration, and when users act independently, great efforts are made to determine identity and to facilitate independent interaction and data presentation.

The classic example of this is where you walk up to a public display in a university and the display shows your time-table.

In the above scenario, what happens when a display is surrounded by more than one person – say 5 people? Does it show 5 individual timetables? If 50 people approach the display, does it show 50 timetables? That is impractical, so what is a fair way to present a crowd’s worth of information to a waiting crowd? It could be shown one timetable at a time, but then who gets to see theirs first, and more importantly, how does one recognise one’s own timetable if one can’t remember it in the first place? Does your name have to appear with the timetable? Will public users be comfortable with this scenario? What authentication methods are used for systems such as these, do users need to opt in, and will these questions have the same answers for different displays?

If we ignore every issue except authentication, how does the display know who you are in order to show the relevant information? By reading a unique ID from your phone, a smart card, or some other wireless device? How does this deal with phones being sold, or smart cards being lost? What happens if someone of a criminal nature got hold of a young student’s ID card and started stalking the student? Is there a line for how public a piece of information must be before it appears on a public display? Even if the display is not in a completely public setting – say it is behind the security checkpoint in a company – who decides how much information is displayed on the screen: the user, or the programmer in charge of the presentation software? If authentication is required before information is shown, does this not ruin the workflow of the display – and would it not also put off frantic, late users from approaching the system?

With privacy being a live issue in society, won’t public displays be relegated to glorified billboards, with the only personal information they are allowed to show being that which could already be found on a person’s public page? Will the only use of these systems be for when someone has lost their phone and the display happens to be free?

Currently I believe the only value of these systems will come from location-based advertising – where content is changed based on the demographic of the people around the display and the time of day.

A different model

If we look into the future (something many scientists are prone to do), there is another model, which is what I believe public interactive displays should be tending towards.

By dissecting the mission of a public display it can be seen that it encompasses two functions.

  1. Being a dedicated geographical point for a certain type of information or request,
  2. Having a method to display feedback to the user.

If we look to the future and imagine a hypothetical piece of hardware exists, we can remove point 2 from the list of needed functions – and remove a lot of privacy issues.

We have this technology – albeit in a crude form – already: the personal screen. Currently, this tends to be a smartphone or similar device. These tend to require authentication when they are switched on, via a pass code. They even support banking systems, which – one would hope – worry about the security of the device in question.

The limitations of these devices are their size and resolution. A personal screen of 4 inches isn’t brilliant, so let us imagine a hypothetical product to facilitate this model: a pair of glasses which could project virtual displays onto its lenses at any arbitrary projection and geometry to simulate real-life displays. They could even be simulated at static points in the real world, needing a user to be close by for them to be used – as in real life. The simulated displays would show whatever the public display wanted to – by virtue of its number 1 function: being a dedicated geographical point for a certain type of information or request.

Let’s go back to the 50 people in front of a timetable billboard. With their own personal screens they would each see just their own timetable, in an almost completely private setting – while still being surrounded by 49 other people. In fact, if wireless communication density is sufficiently high, it could replace conventional screens on desks and on phones altogether. With regard to public billboards, advertisers would get what they have always wanted – a message delivered directly to the person they want to reach.

This is of course all conjecture, but I think it is where the domain should be heading.

Code Golf

CSLU did code golf today. I did one and a half tasks, which were:

  1. Output the first 100 prime numbers
  2. Output e to 100 decimal places

Prime numbers

I’m quite proud of this: I managed it in 55 characters initially, but after some collaboration with the rest of the club I shrank it down to 49 characters.

2.upto(541){|a|i=2;i+=1 while a%i>0;p a if i==a}
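
For anyone reading the golfed line cold, an ungolfed equivalent of the same trial-division approach (541 being the 100th prime) looks roughly like this:

2.upto(541) do |a|
  i = 2
  i += 1 while a % i > 0   # find the smallest divisor of a that is >= 2
  p a if i == a            # if that divisor is a itself, a is prime
end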

e

I never got this fully working, as I got caught up in list comprehensions. I ended up with:

1 + sum [1 / (product [m | m<- [1..n] ]) | n <- [1..300] ]

Which is the same as e = \sum_{n=0}^{\infty} \frac{1}{n!}, and shows how pretty Haskell is.
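
If I were to finish the task, one possible sketch in Ruby (summing exact Rationals and printing via BigDecimal – not what I wrote on the day) would be:

require 'bigdecimal'

# Sum 1/n! exactly as Rationals; ~80 terms is comfortably enough for 100 decimal places,
# since the first omitted term is smaller than 1/81! < 1e-120.
e = (0..80).sum { |n| Rational(1, (1..n).reduce(1, :*)) }

# Convert to a BigDecimal with spare precision and print 100 decimal places.
puts BigDecimal(e, 110).round(100).to_s("F")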

Ruby gem – json_serialisable

I was working on a Ruby project and stumbled upon my first valid application of metaprogramming. I was writing JSON serialisation methods and realised they were all practically identical. So I looked at how attr_accessor worked and then wrote my own class method called attr_serialisable, which generates the serialisation methods automatically.

Example

Given a class A

class A
  attr_accessor :a, :b, :c
  attr_serialisable :a, :b, :c

  def initialize(a, b, c)
    @a, @b, @c = a, b, c
  end
end

attr_serialisable would generate the following methods:

  def to_json(*a)
    {
      "json_class"  =>  self.class.name,
      "a"           =>  @a,
      "b"           =>  @b,
      "c"           =>  @c
    }.to_json(*a)
  end

  def json_create(o)
    new(*o["a"], *o["b"], *o["c"])
  end

This allows the class to be used easily with the ‘json’ library.
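
For anyone curious how such a class method can be built, here is a minimal illustrative sketch using define_method – this is not the gem’s actual source (that’s linked below), just one way it could be done:

require 'json'

class Module
  # Illustrative attr_serialisable: generates to_json and json_create
  # for the named attributes.
  def attr_serialisable(*attrs)
    # Instance method: dump the named attributes plus the class name
    define_method(:to_json) do |*args|
      hash = { "json_class" => self.class.name }
      attrs.each { |name| hash[name.to_s] = instance_variable_get("@#{name}") }
      hash.to_json(*args)
    end

    # Class method: rebuild an instance from the parsed hash
    define_singleton_method(:json_create) do |o|
      new(*attrs.map { |name| o[name.to_s] })
    end
  end
end

# Round trip through the 'json' library:
#   json = A.new(1, 2, 3).to_json
#   JSON.parse(json, create_additions: true)  # => an A instance with @a=1, @b=2, @c=3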

Links

Rubygems page
Source code

Arduino Gas Sensor – Part 2

Previously

So I left the last article with the following:

Well, apart from the hardware I need, some issues need to be addressed which may or may not require extra hardware – as I’ve just thought of them.

  • DateTime equivalent object for when I register a pulse
  • Work out how long these will last on battery
  • Can I set an interrupt to go off when a digital value hits a threshold? Or does this require analogue input? If I can it would massively save on battery as no polling! But, it may require fiddly per-house calibration, which the brute force method ignores
  • Laser/3d Printed box and some form of mounting which will let me attach to anything. Probably going to be velcro

So I’ll go through what has been done since then against these criteria.

Hardware Changes

The DateTime-style object was achieved through an RTC module (http://www.sparkfun.com/products/99) which communicates with the ‘duino over the I2C bus and runs at 5V. A microSD card shield was also added to the hardware for saving events into a simple text file (http://www.sparkfun.com/products/9802).

Rather than use my hastily built photosensor, I used a load of RFXcom sensors, as they are well built and have a housing designed for sticking to the meter (which is by far the biggest engineering challenge of this project). A board layout for interfacing with the sensor units was created, and the gEDA/gschem schematic file can be found on the GitHub project page.

Software Changes

Well, apart from the code which interfaces with the RTC module and the microSD card, not much has changed code-wise. The RFXcom modules work the opposite way round to my prototype, so I measure the time it takes to discharge a capacitor rather than charge it. An LED on a meter normally flashes for 0.1s, so the timeout is set to 0.05s.

Battery Life

Using a multimeter showed the whole sensor drew 92mA. That isn’t ideal: with a 6×AA battery pack holding ~3000mAh, it lasts about a day ( 🙁 ). However, that was with a ridiculously high-powered infrared LED on the RFXcom board. Using just the photosensor (rather than the reflective sensor), the current draw was 42mA, and that was with the SD card always powered. There is a lot of scope for battery-life improvement on this project.

Source

The source can be found on my github page at https://github.com/carl-ellis/Arduino-Gas-Sensor .