
Data bias

Researchers use plenty of statistical methods and mathematics, in isolation or in combination, to turn data into a prediction.  What starts as collection quickly turns to analysis, and then to the inevitable questions: "What can we do with it?" and "Why are we doing it?"

Ironically, a little data coding can answer these questions.

<formula>
<dataset = N+1>
Ask: Do you know why you are doing this?
    If Yes, goto <code1>
    If No, stop <end>
<code1>
Ask: Do you know what to do with the information?
    If Yes, goto <code2>
    If No, stop <end>
<code2>
Display results for dataset(N+1)
goto <formula>

Usually you have to go through the process four or five times before you have asked "why" enough to get a credible answer.  The technique was originally developed by Sakichi Toyoda.  Name sound familiar? The process was adopted by the Toyota Motor Corporation during the evolution of its manufacturing methodologies.  The beauty lies in its simplicity: it introduces no bias of its own.

Data bias has the capacity to enrich or destroy entire libraries of information related to your research.  The road to discovery is almost always led by uncorrupted data delivery methods, i.e. no human interference to slant the data in one direction or another.  If you tell your data, "Hey data, I want you to do 'this' for me," the collection can be geared toward a bias, gathering the information in a particular way to suit your need.  However, that result isn't truly accurate.
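As a rough sketch only (the function name and prompts below are hypothetical, not part of any tool described here), the "keep asking why" loop translates into a few lines of Python:

# A minimal "Five Whys"-style questioning loop, sketched in Python.
# Everything here is illustrative; the prompts and function name are made up.

def five_whys(starting_point, max_rounds=5):
    """Keep asking 'why' until the answers run dry or the rounds run out."""
    trail = []
    question = f"Why are we doing this: {starting_point}"
    for round_number in range(1, max_rounds + 1):
        answer = input(f"{round_number}. {question} ").strip()
        if not answer:
            # No answer is the 'If No, stop' branch of the pseudocode above.
            break
        trail.append(answer)
        question = f"Why: {answer}"
    return trail

if __name__ == "__main__":
    reasons = five_whys("collecting dataset N+1")
    print("Reasoning trail:", reasons)

Run it and it simply keeps asking until you stop answering; by the fourth or fifth pass the real motive, or the absence of one, tends to surface.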

Research begins with a hypothesis, not an answer.  It is easy to confuse the two, especially when you are desperate for an answer.  A hypothesis should lead to a tenable theory, which should lead to an answer or to another question.  The conceptual framework is the analytic tool used to make distinctions and organize ideas on the way to results.  Read "Good to Great" by Jim Collins, which describes the difference between the hedgehog and the fox as a way of organizing a principle for viewing the world.  Once you have the framework in place, you can begin to set categories of results, which will help define the ultimate goals of your data collecting.

[Image: Big data, the next big revolution]

For Data Streams, that means a series of building-related data sets that will help design professionals determine outcomes before they happen and, if caught early enough, correct them before the final design goes out.  Imagine, if you will, a simple rectangular office layout: cubes on one side, a big layout table in the center, and a project viewing wall on the other.  The design team intends for clients to walk along the viewing-wall side and for staff to walk along the cube side.  The center table divides the room equally in half.  Which way do I turn if I'm the client?

There is no realistic answer to this because of the many variables involved: personal preference, time of day, mood, level of interest, and so on.  Let's try to measure what we can do to change that outcome.  Could we simulate a different organization of the room by, say, moving the center table slightly closer to the cubicles? In theory, that SHOULD define the space so that clients know the wider side is for them.  Is that what will happen? What influences their decision, and what happens when they make it past the end of the table, stop at a cubicle, and decide to walk out of the office along the now-narrower cubicle side?
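A small simulation makes the question concrete.  The sketch below is a hypothetical model, not a measurement: the room and table widths, the "mood" noise, and the assumption that preference scales with aisle width are all made-up inputs chosen only to show the mechanics.

import random

# Hypothetical model of which side of the center table a simulated client chooses.
# Every number here is a made-up assumption for illustration, not a measurement.

ROOM_WIDTH = 10.0    # metres across the short axis of the office
TABLE_WIDTH = 2.0    # width of the center layout table

def viewing_wall_share(table_offset, trials=10_000):
    """Fraction of simulated clients who take the viewing-wall side.

    table_offset > 0 shifts the table toward the cubicles, widening the
    viewing-wall aisle.  Preference is assumed proportional to aisle width,
    softened by a random 'mood' term.
    """
    wall_aisle = (ROOM_WIDTH - TABLE_WIDTH) / 2 + table_offset
    cube_aisle = (ROOM_WIDTH - TABLE_WIDTH) / 2 - table_offset
    preference = wall_aisle / (wall_aisle + cube_aisle)
    picks = 0
    for _ in range(trials):
        mood = random.uniform(-0.15, 0.15)   # per-person noise
        if random.random() < preference + mood:
            picks += 1
    return picks / trials

if __name__ == "__main__":
    for offset in (0.0, 0.25, 0.5):
        share = viewing_wall_share(offset)
        print(f"Table shifted {offset:.2f} m toward cubicles: "
              f"{share:.1%} take the viewing-wall side")

The value isn't in the percentages; it's that once the layout and the behavioural assumptions are written down explicitly, you can shift the table in the model and watch how the split changes before anyone moves real furniture.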

Yes, that takes time.  But if you don’t find time to do it right the first time, when will you find time to do it over again?

The video below demonstrates multi-point infrared motion tracking connected to a computer that allows for deliberate motion sensing of occupants.  In this instance I’m experimenting with the Leap Motion controller.
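As a rough illustration of what can be done with the tracked positions from a setup like this (the coordinate stream, the centerline value, and the sample points below are hypothetical stand-ins, not output from the Leap Motion SDK), each recorded point can be binned by which side of the center table it falls on:

from collections import Counter

# Hypothetical bookkeeping for tracked occupant positions.
# The points and centerline value stand in for whatever the tracking rig
# reports; this is not the Leap Motion SDK, just a sketch of the counting.

TABLE_CENTERLINE_X = 0.0   # assumed x-coordinate of the table's long axis

def count_sides(points):
    """Count tracked (x, y) points on each side of the table centerline."""
    counts = Counter()
    for x, _y in points:
        side = "viewing_wall" if x > TABLE_CENTERLINE_X else "cubicles"
        counts[side] += 1
    return counts

if __name__ == "__main__":
    # Stand-in sample: positions in metres relative to the table's axis.
    sample_track = [(0.8, 1.0), (0.9, 2.0), (1.1, 3.0), (-0.4, 3.5), (-0.6, 4.0)]
    print(count_sides(sample_track))

Fed with real tracking data, the same bookkeeping turns the "which way do clients turn?" question into a count you can compare against the simulation.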

[Video: Leap Motion multi-point motion-tracking demonstration]
