Sunday, July 30, 2006

Simcore

To update the work on what is currently being called simcore- simply because that's the name of the folder storing it- I've basically got the 0th iteration, Punt, working fine. The reason it's called Punt? It just randomly generates the signal strength seen at each location along the path, because it has no data to look back on and make guesses from.

The 1st iteration, currently called distance but looking for a better name, deals with what I touched on in my Project Iteration I post. If you have past observations which record signal strength at particular locations, you can then say- "Before, when I was x metres away from an access point I got an average RSSI of r. Thus, for any time in the future where I am x metres from an access point, I will have an RSSI of r."
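
To make that concrete, here's a very rough Python sketch of how those two iterations might behave. The names (PuntModel, DistanceModel, observe, estimate) and the RSSI ceiling of 60 are my own illustrative placeholders- this isn't the actual simcore code.

    import random
    from collections import defaultdict

    class PuntModel:
        """Iteration 0: no past data, so just punt and guess a random RSSI."""
        def __init__(self, rssi_max=60):    # placeholder ceiling, card-specific
            self.rssi_max = rssi_max

        def estimate(self, distance):
            return random.randint(0, self.rssi_max)

    class DistanceModel:
        """Iteration 1: average past RSSI readings, keyed by distance in metres."""
        def __init__(self):
            self.readings = defaultdict(list)    # rounded distance -> RSSI values

        def observe(self, distance, rssi):
            self.readings[round(distance)].append(rssi)

        def estimate(self, distance):
            seen = self.readings.get(round(distance))
            if not seen:
                return None    # no past observation at this distance
            return sum(seen) / len(seen)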

There are currently just a couple of I/O and storage issues, but hopefully it will be up and working in a day or two and I can look at starting on "pass through walls" or somesuch. Alternatively, I could actually start comparing a real walk with the Punt and Distance simulations. To do so, I'll need some architects' floorplans of UWA buildings, so I can use the latest version of wifiplot with them. In any case, there's plenty of work ahead.

Thursday, July 20, 2006

Project Iteration IV: Path Loss Model

So seeing as I'm currently looking at ways of filling in all this white space, let's have a look at a more traditional, technical, mathematical (eek!) approach called Path Loss Modelling.

Log Distance Path Loss modelling is used extensively in the field, with the following basic equation:

PL(dB) = PL(d0) + 10*n*log10(d/d0)

where:
n = path loss exponent
d0 = close-in reference distance
d = distance between transmitter and receiver
PL(d0) = path loss at the close-in reference distance d0

Right now, I assume d0 to be the distance between the access point and, say, the nearest wall, but this I'll have to confirm. The other problem is that this method uses dB as its metric, whereas so far I have used RSSI values (Atheros, PLEASE send me the RSSI_Max values for the MacBook Pro's Airport Extreme card!), which may require some sort of conversion, or standardisation, or something.
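
For what it's worth, here's a small Python sketch of the equation, with a purely hypothetical dB-to-RSSI mapping bolted on. The reference loss, path loss exponent, transmit power, noise floor and RSSI ceiling below are made-up placeholder values, not measurements.

    import math

    def path_loss_db(d, d0=1.0, pl_d0=40.0, n=3.0):
        """PL(d) = PL(d0) + 10 * n * log10(d / d0), everything in dB."""
        return pl_d0 + 10.0 * n * math.log10(d / d0)

    def estimated_rssi(d, tx_power_dbm=15.0, noise_floor_dbm=-95.0, rssi_max=60):
        """Map received power onto a 0..rssi_max scale (hypothetical conversion)."""
        rx_dbm = tx_power_dbm - path_loss_db(d)
        fraction = (rx_dbm - noise_floor_dbm) / (0.0 - noise_floor_dbm)
        return max(0, min(rssi_max, round(fraction * rssi_max)))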

Other than that, it's a proven and commonly used method, which may just help to fill in the blanks.

Friday, July 07, 2006

Project Iteration III: Fill in the gaps

Remember colour-by-numbers? Well, that's similar to this. Sort of. Ok, maybe not, but it's a fond childhood memory. Anyway...

So in the previous two posts, we've seen distance-based and situation-based approaches. Now it's time to look at something a little more abstract. Imagine we were to take a large number of readings around the university- it's not hard to imagine a map looking something like this:
Right. Now, given the sheer number of readings presented above, let's not bother trying to figure out patterns or distances. Let's just consider it from a simple statistical standpoint. It would be easy enough to say, "if a point has a minimum of 3 surrounding points with readings already known, the current point will have a signal strength of (Point1 + Point2 + ... + Pointn)/n". Applying this approach:
Well well well, we have full coverage all of a sudden. That's the first big advantage we see here. However, how much of that is due to the method, and how much is due to the sheer number of readings initially collected? This of course won't be known until we compare this method and the other two on the same data points. In addition, how sure can we be of the accuracy of estimating points from other estimated points? There are certainly some odd-looking readings on the above chart:

  • (8,7) has a signal strength of 4, yet one square later it has dropped to 1?
  • (6,8) to (7,8) actually increases in signal strength?
  • What's the deal with the top right access point? It's surrounded by 9s, and yet others have 10 all around.
And so forth. Still, the idea is there, and it shall be interesting to see if I can combine it with the other two in any way to increase my accuracy.
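
Just to pin the averaging rule down, here's a quick Python sketch of it over a grid stored as a dict of (x, y) -> signal strength. The use of 8-connected neighbours and the repeat-until-stable loop (which lets estimates seed further estimates, for better or worse) are my own assumptions.

    NEIGHBOURS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]

    def fill_gaps(grid, min_known=3):
        """grid maps (x, y) -> signal strength, or None where unknown."""
        changed = True
        while changed:                # repeat so estimates can seed further estimates
            changed = False
            for (x, y), value in list(grid.items()):
                if value is not None:
                    continue
                known = [grid[(x + dx, y + dy)] for dx, dy in NEIGHBOURS
                         if grid.get((x + dx, y + dy)) is not None]
                if len(known) >= min_known:
                    grid[(x, y)] = sum(known) / len(known)
                    changed = True
        return grid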

Project Iteration II: Object pattern matching. Sort of.

So, from my first post, we all know our imaginary sector of the university. Now let's imagine we thought about the measurements we take differently. Instead of randomly taking readings here and there, and then working out distances and signal estimates, let's think with a little more forethought. Perhaps, just perhaps, we could look at points of this sector where a particular transition occurs- like a signal passing through a wall, or a tree, or around a corner. With this in mind, we can take fewer readings, and simply reapply the logic whenever that situation occurs in the future- passing through a wall should give a consistent decrease in signal, shouldn't it?

With this in mind:
Seems a great idea, doesn't it? Means we don't have to take a bucketload of readings, right? Well, let's have a look:
As we can see, we've hit a snag. Rather, we've hit several:
  • We know by how much signal strength decreases when it passes through a wall, but we don't actually know what the strength is at the wall itself. This alone isn't such a big issue- after all, who is going to use their laptop in a wall? (Go on then, prove me wrong.)
  • The biggest issue here is the bottom right hand area, which has an abundance of "?"s representing an unknown signal strength. Why? Well, there are a few things here. The first is a matter of precedence. We know that supposedly signal decreases by one over each square it travels in air (6-5-4), but what happens when we get to the corners of the building at (3,4)? Note here we have readings of 8, 8, 8, 7 and 5 around it. Which of these do we subtract one from? Furthermore, look at (5,8). The first thing we think is, "ah- this will be 7, it's one square away from 8". But wait, it's behind a tree, so do we apply the "minus whatever when we go through a tree" rule? Indeed, what does "through a tree" mean? Does it have to be north-to-south as measured originally?
All of a sudden, this approach has become very messy. Perhaps we could adopt an approach that aggregates the surrounding squares, giving us our answer. We will explore an approach like this in Iteration III (keep in mind my project iterations won't necessarily follow this order, or even feature some of these ideas- I'm just throwing some ideas against a wall and seeing what bounces). For the time being, I like the approach presented here- but it needs to be replicated more. I need more than one reading for "through a tree", and I need to cover other situations like "round a corner" to be sure of full coverage. I also need to work out this problem of precedence. Even so, it's a start.
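
Ignoring the precedence problem for the moment, a Python sketch of the basic rule application might look like this. The penalty values and rule names are invented placeholders, not measured drops.

    TRANSITION_PENALTIES = {
        "air": 1,       # drops by one per square of open air (the 6-5-4 pattern)
        "wall": 3,      # assumed drop for passing through a wall
        "tree": 2,      # assumed drop for passing through a tree
        "corner": 2,    # assumed drop for bending around a corner
    }

    def estimate_along_path(start_rssi, transitions):
        """transitions: ordered situations crossed between a measured point
        and the point we want to estimate, e.g. ["air", "wall", "air"]."""
        rssi = start_rssi
        for situation in transitions:
            rssi -= TRANSITION_PENALTIES.get(situation, 1)
        return max(rssi, 0)

    # e.g. estimate_along_path(8, ["air", "wall", "air", "air"]) -> 2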

Monday, July 03, 2006

Project Iteration I: The most basic of the basic

This is the first post in what will be a series about my thinking for my project, and how I could go about estimating signal strength from captured data.

So, say we capture points around uni such that the orange represents a building, the green trees, the red squares access points, and the yellow circles locations where I have taken a reading (with their relative signal strengths).

Now say that I wished to "fill the grid", working out what the signal strength would be at each point on this grid. The most basic way to do this would be to treat signal strength as a simple function of distance from an access point: record the distance to the nearest access point for each reading, and then, for each location whose distance matches an observation, give it the same signal strength as that observation.

The second table above works out the distance from each point to its nearest access point. Then we simply fill in the top table by copying in the signal strengths observed at the distances 1, 1.41, 2.42, etc. Unfortunately, this example exposes flaws in this very basic method- such as the large gaps of whitespace where no signal strength reading is recorded for the distance in question.
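
In code, the fill looks something like this Python sketch- compute each square's distance to its nearest access point, then copy across any signal strength observed at a matching distance, leaving None (the whitespace) everywhere else. The function names and the distance tolerance are my own choices for illustration.

    import math

    def nearest_ap_distance(point, access_points):
        return min(math.dist(point, ap) for ap in access_points)

    def fill_by_distance(squares, access_points, observations, tolerance=0.01):
        """observations: list of (distance, signal strength) pairs from readings."""
        filled = {}
        for square in squares:
            d = nearest_ap_distance(square, access_points)
            matches = [s for od, s in observations if abs(od - d) < tolerance]
            filled[square] = matches[0] if matches else None    # None = whitespace
        return filled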

An improvement could possibly involve "guessing" the whitespace squares based on the signal strengths observed around them- however this would fall down in the area around the trees, where very few readings are seen. For this method to be anywhere near successful, many more readings would be needed than this example presents.