FieldKit® for researchers

Frequently Asked Questions

(Always under construction)

This FAQ page will be updated continuously as new queries come in and new issues arise. Check back with this page periodically.


General Issues
          Why FieldKit®?

Pre-Field Issues
          Attention Studies
          Question Construction
          Interactive Form Design
          Interface Appearance

In-Field Issues
          Automatic Data Backups

Post-Field Issues
          How to Code Behaviors
          Producing Tables



Why FieldKit®?

Q: What can I use FieldKit® for?

A1: Data collection. It has unique tools for in-person interviewing, video and audio recording, on-site observations, paper-pencil studies with keypunching, computer-user monitoring, and central-location testing of many sorts.

A2: Coding, analysis and reporting. Its simple-to-use but powerful post-field tools can be used on all kinds of research data – whether collected with FieldKit® or acquired from other sources.

Q: Why would I use FieldKit® analytics if I'm already skilled at SPSS or another high-end statistical package?

A1: Maybe you shouldn’t. There are many terrific packages for people with high-level skills – packages that can do much deeper and more complex statistics and charting.  And people with programming skill can certainly create their own analytic programs.

 A2: But the FieldKit might save you money, and it will probably save you time.

 If you don’t already own these programs, or have subscriptions to use them, FieldKit will definitely save you money - a great deal of it.

And if your skills with these other programs are modest, you will find that FieldKit is vastly easier to learn. The hardest thing about FieldKit is learning what it can do, not learning how to do it.

Q: What does it offer me that Survey Monkey, Survey Gizmo, Zoomerang and other web-survey services do not?

A: These are terrific services – I have used them myself and will happily do so again. But web surveys are not the best tools for every kind of research. And the general-purpose analytics that serve the average web-survey project are not the best tools for every kind of analysis.

A1: Different kinds of data collection. FieldKit® offers a number of data-collection methods that the on-line services are not set up for, including moderated interviews, paper-pencil surveys, focus groups, ethnographic research, on-site behavioral observations, web-site user-reaction studies, interactive surveys requiring a broader range of conditional actions, and studies where the researcher needs more control over the user interface.

A2: Different kinds of analysis. FieldKit® offers a number of analytic tools that they don’t provide: sophisticated content analysis, video analysis, differential word-use counting, rapid verbatim collating, powerful and efficient coding interfaces, one-step inter-coder reliability reporting and analysis, integrated and synchronized video controllers, and a platform for doing grounded-theory analysis projects on quantitative and qualitative data.

A3: Speed and ease of use. If you have done a web survey through a commercial service and would like a quick, easy way to analyze and prepare reports on your data, FieldKit can import that file, reconfigure it automatically, and provide you with its full range of easy-to-use content-coding, analytic, graphing and reporting tools.
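A common statistic behind inter-coder reliability reporting of the kind mentioned in A2 is Cohen's kappa, which corrects raw agreement for chance. This FAQ does not say which statistic FieldKit® reports, so the sketch below is a generic Python illustration, with the two coders' labels as hypothetical data:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders' labels, corrected for chance."""
    n = len(coder_a)
    # observed proportion of items the two coders labeled identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # expected chance agreement from each coder's label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# hypothetical codings of six behaviors by two coders
a = ["pos", "pos", "neg", "neg", "pos", "neg"]
b = ["pos", "neg", "neg", "neg", "pos", "neg"]
kappa = cohens_kappa(a, b)   # 5/6 observed vs 1/2 expected -> 2/3
```

A kappa of 1.0 means perfect agreement; 0 means agreement no better than chance.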





Attention Studies

Q: When should attention to a stimulus (like a TV show) be measured instead of eye-movement tracking?

A: When you want to find out whether the stimulus is engaging relative to others, as opposed to finding out what points or objects within the stimulus frame most draw the viewers' gaze, which is what eye-tracking measures. Both give potentially very useful information, but of very different utility.

There is a tendency, in Western cultures, to look at ever-smaller pieces when trying to explain why more general things happen, but sometimes the causal connections get lost. If you want to find out whether a program has the power to keep an audience engaged, measure whether people spontaneously attend to it in a setting that doesn't force them to attend, and study those moments when viewers are won or lost. If you want to find out what part of a print ad its readers look at first, and whether they ever look at the product in the picture, eye-tracking might be the best approach. (But be careful not to use test-readers who would not have bothered to look at the ad in the first place: what these people's eyes fix on might be very different from what self-selected readers would look at.)

Q: When should distractors be used?

A1: When real-world viewing has distractions.
Home TV viewing, for example, is often full of distractions and parallel activities. Studies have shown the average person actually looks at the average broadcast TV show for only 70 to 75 percent of the time they are in the room.

A2: When the research is formative.  
When the primary purpose of the research is to discriminate between elements WITHIN the stimulus, it is helpful to use distractors – especially ones which give respondents plenty of opportunities to keep checking out the test stimuli.  Without an alternative to shift their gaze to, disengaged viewers tend to stare vacantly forward, giving no overt sign that they are bored – even when they are.

A3: When scores are to be compared with other tests.
Distractors provide a greater measure of control over the viewing environment.  This standardization is especially helpful when scores are to be compared across different tests and different stimuli, or when a body of norms is being built.

Q: What kinds of distractors work best?

A: It depends. Ask yourself these questions:

1) What are the real-world distractions like, psychologically, and what kinds of controlled distractions might exert a similar level of "press" on a viewer or listener? Think about both external distractions (other people, other media in the space) and internal distractions (daydreaming, etc.) in your considerations.

In an ideal world, you would do on-site observational studies to establish baseline levels for the behavior and media you wish to predict, documenting the kinds of distractions that naturally occur. You would follow that with a series of "distractor design" studies to develop an optimized distractor protocol for the populations and circumstances you are trying to understand.

In our own work, we have been well-served by using a collection of still pictures grabbed from TV shows known to be appealing to the target population. They are run as a random-order slide show, each "slide" showing for 5 to 7 seconds.
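A slide-show protocol like the one just described can be sketched in a few lines. The file names, the function name, and the exact 5-to-7-second range below are illustrative assumptions, not part of any FieldKit® feature:

```python
import random

def slideshow_plan(slides, seed=None):
    """Plan one run of a distractor slide show: a fresh random order,
    with each slide shown for 5 to 7 seconds."""
    rng = random.Random(seed)                      # seed makes a run reproducible
    order = rng.sample(slides, k=len(slides))      # random order, no repeats
    return [(slide, rng.uniform(5.0, 7.0)) for slide in order]

# hypothetical stills grabbed from shows the target population likes
plan = slideshow_plan(["img01.png", "img02.png", "img03.png"], seed=1)
```

Each run of the study would get its own seed (or none), so no two respondents see the same sequence.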

2) What are your research priorities? Formative information to guide creative copy development? If so, use distractors that release a viewer easily, so respondents have repeated chances to re-engage with your copy. Stickier distracting stimuli, like parallel stories or puzzles or games, will show you the first turnaway point in your copy but will be very insensitive at revealing any fundamentally engaging material thereafter. But if your primary need is summative, to predict actual real-world audience numbers or behaviors, then distracting stimuli with more "stickiness" (like an alternate TV show they might tune away to) might serve better.



 Q: How should I place questions on an electronic form?

A1: Predictably.  
The more easily people grasp how to answer each question and the less time they spend figuring out where to go and how to get there, the more they will enjoy the experience, the quicker they will complete it and the fewer who will quit before finishing. Try to keep similar styles of questions together and aligned with each other, for example.

 A2: Efficiently.
Try to minimize mouse movements back and forth and around the page. That is one of the reasons FieldKit places the Next-page and Previous-page buttons at the bottom of each page: that is where the respondent's mouse is when they finish answering the questions on the page. It is also one reason we try to keep answers in a single column on a page: respondents just start at the top and go straight down. They never get lost, and seldom miss a question that way.

 A3: Sparely.
Don't try to cram a lot onto each page. Use a lot of white space. A 10-page survey with 6 questions on each page and lots of space feels a lot shorter than a 5-pager with 12 questions per page. Ever thumb through a James Patterson book? It works brilliantly for him.

Q: Any advice on how to word questions?

A1: Make them easy to follow. 
Use clear words, short sentences and large type -- as much as you can.

A2: Avoid compound questions.
Each question should identify one single issue and only one. Otherwise some respondents will give answers to one while others will answer the other. And your results will be impossible to interpret.

A3: Be interesting.
People love answering questions about things that they know about and things that interest them or things they can talk with other people about. They also like feeling that they are helping you learn something important to the both of you. And they like learning things that may be useful to them in some way.

A4: Set a context for the questions.
Try to frame a scene or a need-state in which respondents can imagine themselves before responding to your questions.  It will make things much easier for them - and fun, and meaningful.

A5: (Usually) Ask questions people can answer without much thought.
Most of the time, if people have to stop and think, then they will be invoking processes they probably don't invoke in guiding real-world behavior. And even if their answers are consistent, there is a high likelihood that they will not be predictive of future thought or behavior in non-questionnaire settings.



 Q: Should I rotate the order in which questions are asked?

A: Yes, in some situations. 
If the questions are similar in style (all check-boxes, say) and they do not follow a logical progression, you should consider rotating them. Position effects can be very strong, with the first or last items in a series getting systematically different answers from those in the middle. A good way to control for these effects is to present the items in a different sequence to each respondent.

FieldKit makes it easy to randomly rotate the presentation order of
questions within a page, to rotate the positions of pages within a survey, and to rotate the position of answers in multiple-choice questions.  You can even rotate clusters of questions:  keeping the original sequence of questions within each cluster.
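The cluster rotation described above can be illustrated with a short sketch (not FieldKit's actual implementation): shuffle the positions of whole clusters, but keep each cluster's internal question order intact. The question ids are hypothetical.

```python
import random

def rotated_order(clusters, seed=None):
    """Return a per-respondent presentation order that shuffles whole
    clusters but preserves the question sequence within each cluster."""
    rng = random.Random(seed)
    shuffled = clusters[:]        # copy so the caller's list is untouched
    rng.shuffle(shuffled)         # rotate cluster positions
    # flatten: questions inside each cluster keep their original order
    return [q for cluster in shuffled for q in cluster]

# three clusters of related questions
clusters = [["Q1", "Q2"], ["Q3", "Q4"], ["Q5"]]
order = rotated_order(clusters, seed=7)
```

Giving each respondent a different seed (or none at all) yields a different cluster sequence per respondent while every related pair stays together.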

Q: When should I use controllers like skips and jumps?

A: When they save respondents time, effort or confusion.
Skips, jumps and auto-fills keep respondents from having to wade through questions that don't apply to them, or whose answers you are not interested in. This lets you ask more questions about things that do matter to you, and makes the experience easier, more personal and more pleasant for your respondents. Interactive surveys can be much more productive and fun for everyone.
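A skip rule is essentially a map from an answer to the next question. Here is a minimal, hypothetical sketch of that idea; the question ids and rules are invented for illustration and are not FieldKit's format:

```python
# Each question may carry a "skip" rule mapping an answer to the id of
# the question to jump to; otherwise the survey proceeds in page order.
questions = {
    "own_pet":  {"text": "Do you own a pet?",  "skip": {"No": "end"}},
    "pet_type": {"text": "What kind of pet?",  "skip": {}},
    "end":      {"text": "Thanks!",            "skip": {}},
}
order = ["own_pet", "pet_type", "end"]

def next_question(current, answer):
    """Apply any skip rule; fall back to the next question in order."""
    rule = questions[current]["skip"]
    if answer in rule:
        return rule[answer]
    i = order.index(current)
    return order[i + 1] if i + 1 < len(order) else None
```

A respondent answering "No" jumps straight past the pet questions; everyone else continues down the page in order.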



Interface Appearance

Q: What fonts work best?

A: It depends.
Readability research has found that fonts with serifs (Times New Roman, for example) are generally easier to read than sans-serif fonts (such as Arial). But it depends a lot on the font size, weight, spacing and other attributes.

There is a convention amongst computer programmers and website designers to use sans-serif fonts of one sort or another, and users are getting accustomed to them. 

The first thing to strive for is clarity. When in doubt, test it out. Show some alternatives to a few people and ask them which is easiest to read, clearest, most appealing, and best fits your topic.

You could even put together a little FieldKit survey with rating questions, using different fonts and sizes on different pages, to give you direction. If your survey is going to be administered on a handheld device, be sure to bump up the font size a couple of levels from the default. It will make your questions much easier to read.

 Q: What colors work best for the interface?

A1: It depends. 
If you have a corporate or personal "look" with a unique palette and typeface, by all means stick with it.  There is a great advantage to having respondents feel as if they know where they are, and the colors and layout of a page can do much to enhance their familiarity and comfort level.  If you know talented graphics designers, ask them for their suggestions – or even better, hire them or work out an informal barter: your research help for their consulting, say.

A2: Don’t take the “Color-meaning” research too literally.
In my experience, in the real world, colors have little intrinsic meaning by themselves. Their impact can be huge, but is almost entirely dependent on context and contrast. “Blue” may be reputed to be a solid, comfortable, reliable and cool color.  But it is a pretty bad color for a roast chicken, and if all the other books on a shelf are blue, you probably wouldn’t want to come out with another blue one if you wanted to get noticed.

A3: If you have the time, test out your top choices – with as much realistic context as you can provide.



Automatic Data Backups

Q: Why does the program keep asking me about backing up my data? Why doesn’t it just do it automatically and not bother to ask me?

A: Because most back-up media are removable, so there is no assurance that a given backup destination will be accessible every time a session is run.



How to Code Behaviors

Q: The reaction videos are not playing back when I try to code from them. What should I do?

A1: Check for any warning screens alerting you to the source of the problem and follow the instructions. Note – sometimes the warning screens are hidden behind other screens on the desktop. Be sure to check.

We once got a warning that Nero was causing a conflict, and were told to get an update to solve the problem.  We did. It did.


A2: Verify that the files themselves are OK. Locate them in the project’s \Data\CapturedFiles\ subfolder. Try to play them back with the software that was installed with the webcam. If that fails, install any camera-driver updates that may be available and try again, or uninstall the camera’s software through the Windows Control Panel and reinstall it.

Q: Can I set other keys to do the scoring? I find the right and left caret keys, [<] and [>], confusing for rating the behaviors being scored in my project.

A: Yes.  The plus and minus keys are already programmed to record positive and negative scores, respectively.  If you use these keys for coding attention, the issue of “right” versus “left” is moot – so you can ignore the direction-of-gaze setting.


Producing Tables

Q: What should I do if the data and headers in cross-tab tables appear to have misaligned columns?

What has probably happened is that one or more entries in the table were wider than the tab settings allowed, which pushed all the cells to their right over into the next column in the on-screen window. This will not affect their location in a spreadsheet program.

A1: Adjust the tab settings.
Each tab setting is shown by the 'L' in the strip at the top of the table. Drag the 'L' to the right until all the text snaps into place.

A2: Load the text into a 3rd-party program for editing.
You can use the Save-to-Excel button to save it to Excel in one step – provided there are no more than 255 columns (Excel's limit).

If you start the 3rd-party program first, you can also cut and paste the data to it from the FieldKit® screen, or use the other program’s ‘Open’ menu to load the table from the project’s \Reports\Tables\ folder. The rich-text (.rtf) version of the table file preserves the tab settings and font formatting for programs that can handle them. The ASCII/plain-text version (.txt) is readable by a wider range of programs, including most data-analysis and graphing applications.
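Because the plain-text version uses one tab per column, any tab-aware reader recovers the table grid cleanly even when the on-screen columns looked misaligned. A minimal Python sketch, with an illustrative in-memory sample standing in for a real file from \Reports\Tables\:

```python
import csv
import io

# a tiny cross-tab as it would appear in the tab-delimited .txt file
sample = "Brand\tMale\tFemale\nAlpha\t12\t15\nBeta\t9\t22\n"

# swap io.StringIO(sample) for open("table.txt") to read a real file
with io.StringIO(sample) as f:
    rows = list(csv.reader(f, delimiter="\t"))   # split each line on tabs

header, data = rows[0], rows[1:]
```

From here the rows can be loaded into a spreadsheet library or graphed directly; the tab characters, not the on-screen spacing, define the columns.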