Hex view

Problem:
The natural world is a great place from which to draw interesting and unique colours. Currently, to capture a colour you would take a photo of the subject and then use an eyedropper tool in a photo-editing or design program to identify it. I wanted to create a way for anyone to identify colours quickly and without specialist software.

 

Solution:
I imagined the app would be simple in its function, with few UI elements, so I took the opportunity to fill the screen with the camera’s video feed. This would hopefully give users as much space as possible to capture their subject.

 

Early UI designs

The leftmost design was my first go at organising the interface elements on the screen, including a viewfinder, a colour block and the colour hex code. In the designs to the right, I made the colour block a more prominent size and displayed the colour code within the block, making way for a colour code toggle button, in case RGB is your thing.

The elements and text would be big, with large tap targets, making the app easy to use when the user is in a hurry or has little time to capture the colour in front of them.

 

Then I started prototyping some of the interactions in Framer. Prototypes are essential for validating designs, something that is very hard to do when working with static screens. It's important to remember, though, not to get carried away making the product actually work while prototyping; it just needs to look like it's working, enough to test your assumptions and hypotheses.

 

Colour code toggle prototype

For the colour code toggle, I wanted it to update in a subtle way that didn't distract from the subject in frame. In the end, I had the colour block move off screen just enough to switch the colour information and then return to its initial position with a slight spring effect. It doesn’t feel like the interface has changed, but rather that it is showing you an alternative view.
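
The Framer code for this interaction isn't shown here, but if I were to sketch the same toggle in Swift, a UIKit version might look roughly like the following. The ColourBlockView class, its label and the timings are hypothetical placeholders, not the prototype's actual code.

import UIKit

// Sketch of the toggle: nudge the colour block off screen, swap the
// hex/RGB text while it is hidden, then spring it back into place.
final class ColourBlockView: UIView {
    let codeLabel = UILabel()
    private var showingHex = true

    func toggleColourCode(hex: String, rgb: String) {
        // Slide just far enough to hide the label
        UIView.animate(withDuration: 0.15, animations: {
            self.transform = CGAffineTransform(translationX: self.bounds.width, y: 0)
        }, completion: { _ in
            // Swap the displayed value while the block is out of view
            self.showingHex.toggle()
            self.codeLabel.text = self.showingHex ? hex : rgb

            // Return with a slight spring so it reads as an alternative view,
            // not a change to the interface
            UIView.animate(withDuration: 0.4, delay: 0,
                           usingSpringWithDamping: 0.7, initialSpringVelocity: 0.5,
                           options: [], animations: { self.transform = .identity })
        })
    }
}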

 

Colour block and Save screen prototype

To prototype the colour block changing, I used a video in the background and a timer to trigger the colour change at a specific point in the video: the moment the viewfinder moves from the ground to the sky. I think this gets across the goal of the app much faster than static screens could. It also made me realise that upon capturing a colour I would probably want to pause the video feed, which directly inspired the blurred background of the Save screen.
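
The prototype itself was built in Framer, but the same trick could be sketched in Swift along these lines. The timestamp and colours below are placeholder values, not the ones from the prototype.

import UIKit

// Rough Swift equivalent of the Framer set-up: play a pre-recorded clip
// behind the interface and flip the colour block at a known timestamp,
// the moment the viewfinder pans from the ground to the sky.
final class ColourChangePrototype {
    let colourBlock = UIView()

    func start() {
        colourBlock.backgroundColor = UIColor(red: 0.36, green: 0.42, blue: 0.25, alpha: 1) // grass

        // Fire once, 3.5 seconds into the clip, where the pan to the sky happens
        Timer.scheduledTimer(withTimeInterval: 3.5, repeats: false) { _ in
            UIView.animate(withDuration: 0.3) {
                self.colourBlock.backgroundColor = UIColor(red: 0.45, green: 0.70, blue: 0.90, alpha: 1) // sky
            }
        }
    }
}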

 

As a further step to validate the design, and to check that the objective could actually be realised in code, I made a prototype in Swift using Xcode. It was indeed possible to take a photo from the live video feed and read the red, green and blue colour values of a pixel in the photo.
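
The prototype's source isn't included here, but a minimal sketch of that pixel-reading step could look like the following, assuming the frame has already been captured as a UIImage and that its pixel data is laid out as RGBA with four bytes per pixel. Error handling and colour-space conversion are left out.

import UIKit

// Read the red, green and blue values of a single pixel from a captured frame
func colour(at point: CGPoint, in image: UIImage) -> (red: CGFloat, green: CGFloat, blue: CGFloat)? {
    guard let cgImage = image.cgImage,
          let pixelData = cgImage.dataProvider?.data,
          let bytes = CFDataGetBytePtr(pixelData) else { return nil }

    // Assumes 4 bytes per pixel (RGBA); real capture buffers may use other layouts
    let bytesPerPixel = 4
    let offset = Int(point.y) * cgImage.bytesPerRow + Int(point.x) * bytesPerPixel

    let red   = CGFloat(bytes[offset])     / 255.0
    let green = CGFloat(bytes[offset + 1]) / 255.0
    let blue  = CGFloat(bytes[offset + 2]) / 255.0
    return (red, green, blue)
}

// Formatting the same values as a hex code for the colour block
func hexString(red: CGFloat, green: CGFloat, blue: CGFloat) -> String {
    String(format: "#%02X%02X%02X", Int(red * 255), Int(green * 255), Int(blue * 255))
}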

Screenshots from the iPhone prototype

Result:
To conclude, I was happy with the design of the app. I would, however, like to do some further research into what other people’s current process is for capturing colours from the natural world.

I would also need to think about efficient ways to make colours available to the user after they are identified, for example whether they get saved to a user profile so that they are accessible from the user’s computer or design programs.

If I were to continue this project further, I know I’d add a better signifier to the colour block to make it clearer that tapping it captures the colour. I also think a copy-to-clipboard feature might nicely hand over to the user the decision of what to do with the identified colour.