(these)abilities L.A.B. #2.2: Grab Lab (Series 2 Session 2)

Posted by: theseabilities | in Projects | 2 years, 8 months ago

 

 

Session 2 is here! It was held this past Sunday, and subsequent L.A.B. sessions for Grab will be a weekly affair from now till end-May! Just drop us an email at hello@theseabilities if you would like to attend!

Our participants so far have come from various backgrounds like design, engineering, healthcare, accounting, economics, various Disability divisions and more, and each of them has enjoyed the L.A.B. sessions in their own right and is returning session after session! There is something for everyone, but don’t take our word for it:

 

“My boyfriend does Economics, but I swear I have never seen him be so creative and effortless at that!”

Felicia Paul, Industrial Designer & (these)abilities L.A.B. participant

 

Right on!

 

“Grab with a Disability” Simulation:

Picking up where we left off in Session 1, we continued our problem discovery by exploring the Grab App without our sense of sight.

*Note: Using the Grab App without our sense of hearing was not an issue, until we looked at the offline interactions of the ride-hailing experience. More details in the next section!

We heard several cries of “I hate VoiceOver!”, “Why is it talking so fast to me?” and “Why they never ask me if I want to pay by cash or credit card??”.

For the uninitiated, screen readers like VoiceOver on the iPhone work like this:

  1. Swipe right or left anywhere on the screen, like with Tinder, and the screen reader will read out the next or previous piece of information respectively.
  2. With enough swipes, you get all the information on one screen. Now imagine apps with multiple transitions or dynamic features… Headache!!
  3. Once the screen reader reads out a specific piece of information/option, double tap to select or open it.
  4. However, not everything you see on the phone screen may be read out by screen readers. This is because not every part of the App is written to be compatible with the screen reader protocol (see the short sketch after this list).
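To make point 4 concrete, here is a minimal, hypothetical sketch of how an iOS developer exposes a control to VoiceOver using UIKit's accessibility APIs. The screen and names (like payByCashButton) are made up for illustration and are not taken from the actual Grab App:

```swift
import UIKit

// A minimal, hypothetical sketch (not the actual Grab App code) of how a
// developer makes a control readable by VoiceOver. Names are illustrative only.
final class BookingViewController: UIViewController {
    private let payByCashButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        payByCashButton.setTitle("Cash", for: .normal)

        // Without properties like these, VoiceOver may skip the control entirely,
        // or read out something unhelpful, which is exactly what point 4 describes.
        payByCashButton.isAccessibilityElement = true
        payByCashButton.accessibilityLabel = "Pay by cash"
        payByCashButton.accessibilityHint = "Double tap to choose cash as your payment method"
        payByCashButton.accessibilityTraits = .button

        view.addSubview(payByCashButton)
    }
}
```

Controls left without such labels are the ones a screen reader skips or mumbles past, and that is what our blindfolded participants kept bumping into.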

 

  

Anyone want to guess how many swipes it takes to read both screens? A LOT! At least 35 swipes.

And here are some of the problems we managed to identify:

  • Features boxed in red were not read out by VoiceOver. So those with blindfolds on did not get enough information (mode of payment, pick-up time, notes) to be confident about making a booking.
  • Dynamic features like the Google Maps plug-in just read out “Google Maps” for some. Others heard “GrabCar, GrabCar, GrabCar, GrabCar, GrabCar” when the app displayed the number of taxis available on Google Maps.
  • The carousel at the bottom that we swipe to choose our type of ride was not completely readable, because screen readers only read what is on the screen, not the options hidden off-screen that we have to swipe to unveil. (One way the last two problems could be addressed is sketched after this list.)
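For the curious, here is a rough sketch, again assuming UIKit and made-up view names, of the kind of fixes that could address the last two problems: giving a dynamic map overlay one meaningful summary (with an announcement when it changes) and explicitly exposing the off-screen carousel items to the screen reader. It illustrates the general technique only and is not Grab's actual code:

```swift
import UIKit

// A rough sketch of possible fixes, assuming UIKit and hypothetical view names.
final class RideMapViewController: UIViewController {
    private let mapOverlayView = UIView()          // stand-in for the Google Maps plug-in
    private let rideTypeCarousel = UIScrollView()  // stand-in for the bottom ride-type carousel

    // Give the dynamic map overlay one meaningful summary instead of letting
    // VoiceOver read "GrabCar, GrabCar, GrabCar..." for every pin on the map.
    func update(availableCars: Int) {
        mapOverlayView.isAccessibilityElement = true
        mapOverlayView.accessibilityLabel = "Map"
        mapOverlayView.accessibilityValue = "\(availableCars) cars available nearby"

        // Announce the change so a VoiceOver user hears it without re-swiping the screen.
        UIAccessibility.post(notification: .announcement,
                             argument: "\(availableCars) cars available nearby")
    }

    // Explicitly list the carousel's item views so VoiceOver can reach the
    // options that stay hidden off-screen until you swipe.
    func exposeCarouselItems(_ itemViews: [UIView]) {
        rideTypeCarousel.accessibilityElements = itemViews
    }
}
```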

Think not just of the difficulty of navigating the App, but also of the difference in time it takes a sighted person versus a visually-impaired person to use it, and how that might affect their journey.

 

Angel/Devil Driver Game:

Next, we wanted to uncover problems not just with using the App (mainly meant for booking your ride), but also with the offline interactions like locating your ride, communicating with the driver, giving directions and more!

So we played a game known as Angel/Devil Driver. Here’s how it goes:

  1. One team will try to be the “worst drivers ever”, doing things like making an intentional detour whilst acting blur about it, lying to passengers about traffic conditions, insisting they know the best routes & disregarding the passenger’s specifications and more!

 

  2. Another team will try to be the “best drivers ever”, doing things like trying their best to communicate and chat with the passenger, informing them of traffic conditions heard over the radio, ensuring passengers were picked up and dropped off at safe locations, and more!

 

  3. The rest would simulate being Visually-impaired or Deaf and improvise, on the fly, ways to bring out the best in the Angel Drivers and prevent the evil-doings of the Devil Drivers from happening.

 

There was even a map of Singapore and cars to simulate the rides with!

 

Some got REALLY into it, role-playing driver & passenger!

Some of the simplest suggestions we had:

  • For the Visually-impaired: Listen to Google Maps Navigation during the ride, to know where the car is going.
  • For the Deaf: Write down very specific instructions for the driver. This included which exit to take onto the highway, which highway to take, which exit to get off at, and other turn-by-turn directions… even before getting in the car!

Not the most user-friendly solutions, but the simplest ones around until more accessible changes are made to the ride-hailing experience! But for now…

 

Ideation Part 1:

After surfacing an ample number of problems from the “Grab with a Disability” Simulation & the Angel/Devil Driver game, we set out to think of ideas to solve them!

We used a traditional ideation tool known as 6-3-5.

6 people in a group, each with a piece of paper, draw out 3 ideas in 5 minutes. After every 5 minutes, they pass their papers along in a circle, so that their neighbours can add on to the ideas. After 6 rounds, that is 30 minutes and 6 × 3 × 6 = 108 ideas! This is also the stage where ideas can be as crazy and outrageous as they like. We even encourage that!
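If you like seeing the numbers worked out, here is a tiny, purely illustrative Swift sketch of the 6-3-5 arithmetic and the paper rotation (the sheet contents are placeholders):

```swift
// A toy sketch of the 6-3-5 method: 6 people, 3 ideas per round, 5-minute rounds.
let people = 6, ideasPerRound = 3, minutesPerRound = 5
let rounds = people                                   // each sheet visits every participant once
let totalIdeas = people * ideasPerRound * rounds      // 6 x 3 x 6 = 108 ideas
let totalMinutes = rounds * minutesPerRound           // 6 x 5 = 30 minutes

// Simulate the sheets being passed around the circle.
var sheets = (1...people).map { ["Sheet started by person \($0)"] }
for round in 1...rounds {
    for index in sheets.indices {
        sheets[index].append("Round \(round): 3 more ideas added")
    }
    sheets = Array(sheets.dropFirst()) + [sheets[0]]  // pass each sheet one seat along
}
print(totalIdeas, totalMinutes)                       // 108 30
```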

Here are some of the ideas:

 

In Session 3, we will start to make sense of these ideas and synthesize them into just a few really solid, pragmatic ones! We will also learn the early stages of building our own mobile App!

Till then, don’t wait for the review of Session 3, come for it! 10th April, 11am-2pm at the Prototyping Lab @ National Design Centre.

See you there!!

 

 

 
