Backstory

I am a product designer at Google, and I joined the company through Sparrow, a French startup that got acquired on July 20, 2012. I then worked with the Gmail team to build, from scratch, a flagship product that became Inbox by Gmail. It shipped on October 22, 2014.

I designed productivity applications for a few years, and I felt I had reached a tipping point. I wanted to expand my skill set, learn new things every day, and get better at something I had never touched. I needed new challenges to reboot myself by leaving my comfort zone.

I got interested in virtual reality around the Oculus Kickstarter period because of the immersiveness and endless possibilities that came with it. There is nothing more exciting than building for a new medium and exploring an uncharted territory.

I joined the Google Cardboard and Virtual Reality team on April 17, 2015. Thanks to Clay Bavor and Jon Wiley for this great opportunity.

Another dimension

My first weeks on the team were as scary as it gets. People used words I had never heard and asked me questions I didn't know how to answer.

I am not going to lie, ramping up on the jargon was not easy, but I was expecting that. Virtual reality is a deep field (pun intended), grouping together a variety of job titles, each requiring a very specialized skill set. The first weeks were intense, and day after day I got a better view of the big picture. Slowly, the pieces came together. I found out which roles would be the best fit, what I wanted to do, and what was required to get there. Regardless of the mission, I knew I would have to learn a lot, but I was prepared for this challenge. My feelings varied from one day to the next: from super excited to create and learn something new, to super scared because of the colossal amount of knowledge I still had to acquire. Working with smart and knowledgeable people reinforced these mixed feelings.

Everything is going to be alright

I told myself and firmly believed that the dots would connect eventually. I am a passionate person, and I knew that I didn’t mind spending hours learning and experimenting.

During my product design career, I got better at understanding, identifying, and resolving user problems. Making things easy to use and delighting users is not that different, no matter the medium.

The core of the mission is the same, but to get you from point A to B there are some interesting things to know.

  • Sketching is, again, at the core of everything. During any brain dump or design phase, sketching is as fast as it gets. I have sketched more since I joined this team than in my entire career.
  • Any design skills, as diverse as they may be, will be a huge benefit.
  • Photography knowledge will help you, because you will interact with concepts such as field of view, depth of field, caustics, exposure, and so on. Being able to use light to your advantage has already been very valuable to me.
  • The more you know about 3D and its tools, the less you will have to learn. It's pretty obvious, but be aware that at some point you might do architecture, character, and prop modeling, rigging, UV mapping, texturing, dynamics, particles, and so on.
  • Motion design is important. As designers, we know how to work with devices with physical boundaries. VR has none, so it's a different way of thinking. "How does this element appear and disappear?" will be a recurring question.
  • Python, C#, C++, or any previous coding skills will help you ramp up faster. Prototyping has a big place because of the fundamental need to iterate. This area is so new that you might be one of the first to design a unique kind of interaction. Any recent game engine, such as Unity or Unreal Engine, integrates code heavily. There is a large, active community in game and VR development, with a huge amount of training and resources already.
  • Be prepared to be scared, and get ready to embrace the unknown. It's a new world that evolves every day. Even the biggest industry-leading companies are still trying to figure things out. That's how it is.

Roles

Design teams will evolve because this new medium opens a lot of possibilities for creation. Think about the video game or the film industry for instance.

I think there will be two big design buckets.

The first one will be about the core user experience, interface, and interaction design. This is very close to how product design teams are structured today (visual, UI, UX, and motion designers, researchers, and prototypers).

Each role will have to adapt to the rules of this new medium and keep a tight relationship with engineers. The goal will always remain the same: create a fast iteration cycle to explore a wide range of interactive designs.

On the other hand, content teams will replicate indie and game design studio structures to create everything from unique experiences to AAA games. The entertainment industry as we know it in other mediums will likely be very similar in VR.

Ultimately, both will have a close relationship to create a premium end-to-end experience. Both industries have a great opportunity to learn from each other.

To wrap up my personal experience: I think being a product designer in VR is not that different, but it requires a lot of dedication to understand and learn a vast field of knowledge.

First step and fundamentals of VR design

First step

In this second part of the article, I will try to cover the basics you need to know about this medium. It's meant to be designer-oriented and as simplified as possible.

Let’s get (a little bit) technical

The new dimension and the immersiveness are game changers. There is a set of intrinsic rules you need to know in order to respect your users' physiology and treat them carefully. We grouped some of these principles into an app, so you can learn them through a great immersive experience.

Download Cardboard Design Lab

You can watch Alex’s presentation at I/O this year which goes more in-depth. The following is a small summary.

If you have to remember just two rules:

  • Do not drop frames.
  • Maintain head tracking.

People instinctively react to external events you might not be aware of, and you should be designing accordingly.

Physiological comfort. This covers notions like motion sickness. Be careful when using acceleration and deceleration. Maintain a stable horizon line to avoid the "sea sickness" effect.

Environment comfort. People can experience various discomforts in certain situations, like heights, small spaces (claustrophobia), big spaces (agoraphobia), and so on. Be careful with scale and colliding objects. For example, if someone throws an object at you, you will instinctively try to grab it, dodge it, or protect yourself. Use this to your advantage, not to the user's disadvantage.

You can also use the user's senses to help you create more immersive products and cues. You can find inspiration in the game industry, which uses all sorts of tricks to guide players during their journey. Here are a couple:

  • Audio for spatial positioning.
  • Light to show a path and help the player.

Do not hurt or over-fatigue your users. It's a classic mistake when you start to design for this medium. As cool as they look, Hollywood sci-fi movies have fed us interactions that go against simple ergonomic rules and can create major discomfort over time. Minority Report gestures are not suitable for a long period of activity.

I made a very simplified illustration of XY head-movement safe zones: green is good, yellow is OK, and red should be avoided. There are some user studies made public (links at the bottom of the page) that will give you more in-depth information on the topic.

A simplified illustration of XY head movement safe zones.

Bad design can lead to more serious conditions.

As an example, have you heard of Text Neck? A study published in Neuro and Spine Surgery measured the varying pressure in our necks as the head moves to different positions. Moving from a neutral head position looking straight ahead to looking down increases the pressure by 440%. The muscles and ligaments get tired and sore, the nerves are stretched, and discs get compressed. All of this strain can lead to serious long-term issues, such as permanent nerve damage.

TL;DR: avoid extended look-down interactions.
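
If you prototype in Unity (C# being one of the languages mentioned earlier), a minimal sketch of this guidance could watch for sustained look-down. This is my own illustration, and the threshold numbers are assumptions, not values from the cited study:

```csharp
using UnityEngine;

// Accumulates how long the user keeps their head pitched down and logs a
// warning. Thresholds are illustrative assumptions, not study values.
public class NeckComfortMonitor : MonoBehaviour
{
    const float LookDownDegrees = 20f;  // assumed start of the "yellow" zone
    const float WarningSeconds = 30f;   // assumed limit for sustained look-down

    float lookDownTime;

    void Update()
    {
        // eulerAngles.x is reported in 0..360; map it to a signed pitch
        // where positive means the user is looking down.
        float x = Camera.main.transform.eulerAngles.x;
        float pitch = x > 180f ? x - 360f : x;

        lookDownTime = pitch > LookDownDegrees ? lookDownTime + Time.deltaTime : 0f;

        if (lookDownTime > WarningSeconds)
            Debug.LogWarning("Sustained look-down interaction; consider moving UI up.");
    }
}
```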

Degrees of freedom

The body has six different ways of moving in space. It can rotate and translate in XYZ.

3 Degrees of freedom (Orientation tracking)

Phone-based head-mounted devices such as Cardboard or Gear VR track orientation via an embedded gyroscope (3DOF). Rotations on all three axes are tracked.

6 Degrees of freedom (Orientation + Position tracking)

To achieve six degrees of freedom, the sensor(s) must also track position in space (+X, -X, +Y, -Y, +Z, -Z). High-end devices like the HTC Vive or Oculus Rift are 6DOF.
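
As a concrete illustration, here is a minimal Unity/C# sketch (not from this article) contrasting what each class of device provides: with 3DOF, only the rotation carries tracking data, while 6DOF adds position:

```csharp
using UnityEngine;

// On a 3DOF headset (Cardboard, Gear VR) only the rotation below carries
// tracking data; on a 6DOF headset (Vive, Rift) the position is tracked too.
public class HeadPoseLogger : MonoBehaviour
{
    void Update()
    {
        Transform head = Camera.main.transform; // the head-tracked camera

        Quaternion orientation = head.rotation; // 3DOF: orientation only
        Vector3 position = head.position;       // 6DOF adds this

        Debug.Log("rotation: " + orientation.eulerAngles + "  position: " + position);
    }
}
```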

Tracking
Making 6DOF possible frequently involves optical tracking of infrared emitters by one or more sensors. In Oculus’s case, the tracking sensor is on a stationary camera, while in Vive’s case the tracking sensors are on the actual HMD.

Oculus and Vive Lighthouse position tracking

Inputs

Depending on the system you are designing for, the input method will vary and affect your decisions. For example, Google Cardboard has a single button, which is why its interaction model is a simple gaze and tap (sketched below). The HTC Vive uses two six-degrees-of-freedom controllers, and the Oculus Rift will ship with an Xbox One controller but will eventually use a 6DOF dual controller, Oculus Touch. All of them allow you to use more advanced and immersive interaction patterns.

The good old Xbox One controller, and Oculus Touch
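
Here is a rough Unity/C# sketch of the gaze-and-tap model mentioned above. It assumes Cardboard's single button surfaces as a click (as it does in common Unity setups), and the message name is purely illustrative:

```csharp
using UnityEngine;

// Gaze and tap: cast a ray from the centre of the user's gaze and notify
// whatever it hits when the single button fires.
public class GazeAndTap : MonoBehaviour
{
    void Update()
    {
        if (!Input.GetMouseButtonDown(0)) // Cardboard tap arrives as a click
            return;

        Ray gaze = new Ray(Camera.main.transform.position,
                           Camera.main.transform.forward);

        RaycastHit hit;
        if (Physics.Raycast(gaze, out hit, 100f))
        {
            // "OnGazeTap" is an illustrative message name, not a Unity API.
            hit.collider.SendMessage("OnGazeTap",
                                     SendMessageOptions.DontRequireReceiver);
        }
    }
}
```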

There are also other kinds of inputs, such as hand tracking. The most famous is Leap Motion, which you can mount to your head-mounted display (HMD).

Leap Motion on top of a DK2

This area constantly evolves as the technology catches up, but as of today, hand tracking is not reliable enough to be used as a main input. The principal issues relate to the tracking of hands and fingers, collisions, and subtle movements.

Even though it's very familiar, using a game controller is a disappointing experience. It physically removes some of the freedom VR creates. In first-person shooters, strafing and moving can cause discomfort because of the accelerations.

On the other hand, the HTC Vive controllers reinforce the VR experience thanks to their six degrees of freedom; Tilt Brush is a really good example. As I write these lines, I haven't tried Oculus Touch, but every demo I have seen looks very promising. Check out the Oculus Toybox demo.

While designing user interfaces and interactions, input is the keystone that will drive decisions differently depending on which method you are using. You should be familiar with all of them and aware of their limitations.

Tools

This is a big piece and might require a more in-depth article. I will focus on the most popular tools used in this industry.

Pen and paper

You just can't beat them. They are the first tools we use because they are always around and don't require much skill. They are a proven way to express your ideas and iterate at a fast and cheap pace. These factors are important because, in VR, the cost of moving from wireframes to hi-fi is higher than in 2D.

Sketch

I still use it every day. Because of its ease of use, it's the perfect tool for creating a lot of explorations before moving to a VR prototype. It's also handy for its export tools and plugins, which are a huge time saver. If you are not familiar with the program, I wrote articles here and there.

Cinema 4D

I don't see C4D as a competitor to Maya. Both are great tools, and each excels in its own way. When you don't have a 3D background, the learning curve can be very steep. I like C4D because its interface and its parametric, non-destructive approach make sense to me. It helps me create more iterations quickly. I love the MoGraph modules, and a lot of great plugins are available. The community is very active, and you can find a lot of high-quality learning materials.

Cinema 4D motion explorations

Maya

Maya is colossal, in a good way and a bad way. It does anything and everything a 3D artist needs. Most games and movies are designed with it. It's a robust piece of software that can handle massive simulations and very heavy scenes with ease. From rendering and modeling to animation and rigging, it's simply the best tool out there. Maya is highly customizable, and that is one reason it's the industry standard. Studios need to create their own sets of tools, and Maya is the perfect candidate for integration into any pipeline.

On the other hand, learning all the tools will require your full and unconditional dedication for quite some time. I mean weeks of explorations, months of learning and years of practice on a daily basis.

Unity

It's most certainly THE prototyping tool, where everything will happen. You can easily create and move things around with a direct VR preview of your project. It's a powerful game engine with a great community and a ton of resources available in its store (the asset author determines the pricing). In the asset library, you can find simple 3D models, complete projects, audio, analytics tools, shaders, scripts, materials, textures, and so on.

Their documentation and learning platform are stellar. They have a wide range of high-quality tutorials.

Unity3D mainly uses C# or JavaScript and comes with Microsoft Visual Studio. It doesn't come with a built-in visual editor, though you can find good ones in the Asset Store.

It supports all major HMDs and is the best for building cross-platform: Windows PC, Mac OS X, Linux, Web Player, WebGL, VR (including HoloLens), SteamOS, iOS, Android, Windows Phone 8, Tizen, Android TV and Samsung Smart TV, as well as Xbox One & 360, PS4, PlayStation Vita, and Wii U.

It supports all major 3D formats and is best in class for 2D game creation. The in-app 3D editor is weak, but people have built great plugins to correct that. The software is license-based, but you can also use the free version to a certain extent. You can check the details on their pricing page. It's the most popular game engine out there, with roughly 47% market share.

Unreal Engine

Unreal is the direct competitor of Unity3D. It also has great documentation and video tutorials. Its store is smaller because it's much newer.

One of its big advantages over the competition is its graphics capabilities; Unreal is one step ahead in nearly every area: terrain, particles, post-processing effects, shadows and lighting, and shaders. Everything looks amazing.

Unreal Engine 4 uses C++ and comes with Blueprint, a visual script editor.

I haven’t worked with it too much yet, so I can’t elaborate more.

It has less cross-platform compatibility: Windows PC, Mac OS X, iOS, Android, VR, Linux, SteamOS, HTML5, Xbox One, and PS4.

Closing notes

Virtual reality is a very young medium. As pioneers, we still have a lot to learn and discover. That’s why I am very excited about it and why I joined this team. We have the opportunity to explore and we should, as much as we can. Understand, identify, build and iterate. Over and over.
And over again…

Resources

Community

  • Immersive design Facebook group

Videos

  • Google I/O 2015 — Designing for Virtual Reality
  • Oculus Connect keynotes
  • VR Design: Transitioning from a 2D to 3D Design Paradigm
  • VR Interface Design Pre-Visualisation Methods
  • 2014 Oculus Connect — Introduction to Audio in VR

Tutorials

  • Cinema 4D tutorials
  • Unity 3D tutorials
  • Maya and 3D tools tutorials

Articles

  • LeapMotion — VR Best Practices Guidelines
  • The fundamentals of user experience in virtual reality
  • Ready for UX in 3D?

Thanks to everyone who helped me with the rereading and improvements 💖

To conclude this series of tutorials, we will now see How To Solve A 4x4x4 Rubiks Cube.

The main purpose of the series is to help you learn how to solve Rubik's cubes much more effectively.

We have seen that solving the Junior Cube uses a subset of the steps for solving the Standard Cube.

We will now see that, for the 4x4x4 Rubik's Cube (and bigger cubes), the Standard Cube method is the basis for solving more complex cubes.

More complex Rubik's Cubes can be solved using what is commonly called the 3x3x3 reduction method.

This method requires that you know how to solve the Standard Cube. If you need to learn how, please read 'How To Solve A 3x3x3 Rubiks Cube'.

Note:

For simplicity, this tutorial is divided into four pages. On this first page, terms are defined and the method is described.

Table Of Contents

  • How To Solve A 4x4x4 Rubiks Cube
  • Pieces and Faces
  • Additional Faces
  • Turn Of An Internal Face
  • Description Of The Algorithm
  • Step 1, Solving The Centres
  • Step 2, Pairing up the Edges
  • Step 3, Finishing the Cube
  • The Color Scheme
  • Swapping Two Opposite Centres
  • Solve A 4x4x4 Rubiks Cube
  • Step 1, Solving The Centres
  • I] First White Row
  • II] First Yellow Centre
  • III] Finishing the White Centre
  • IV] Concluding The Centres
  • Step 2, Pairing up the Edges
  • Pairing, Case A
  • Pairing, Case B
  • Step 3, Finishing the Cube
  • Last Layer Edges Parity Error
  • Incomplete Line
  • Incomplete Cross
  • Top Layer Edges Parity Error
  • Opposite Dedges
  • Adjacent Dedges
  • Top Layer Corners Parity Error
  • Corners In Line
  • Corners In Diagonal

How To Solve A 4x4x4 Rubiks Cube

In order to understand How To Solve A 4x4x4 Rubiks Cube, you need to be familiar with the notation. If you don't know it, please read 'How to solve a Rubiks Cube' before continuing.

For the purposes of the following tutorial, a set of colors will be chosen for the faces; you can choose others.

Pieces and Faces

  • Corner - a physical corner piece. A corner piece has three sides. There are eight corners.
  • Edge - a physical edge piece. An edge piece has two sides. There are twenty-four edges.
  • Centre - a physical centre piece. A centre piece has one side. There are twenty-four centres.
  • Face - a side of the cube. There are six external faces and six internal faces.

Additional Faces

A 4x4x4 Rubiks Cube has internal faces; they are named with a lowercase letter.

  • Internal Upper Face - u
  • Internal Down Face - d
  • Internal Left Face - l
  • Internal Right Face - r
  • Internal Front Face - f
  • Internal Back Face - b

Turn Of An Internal Face

In a 4x4x4 Rubiks Cube, the internal faces can turn.

To facilitate the turn (and the notation) of an internal face, it is rotated together with the outer face.

See the difference in the following examples of a clockwise turn of the External and the Internal Upper Face (also note the double arrow, which denotes turning two faces).
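
For readers who like code, here is one hypothetical way to encode this notation; the Move type and its fields are my own illustration, not part of the tutorial:

```csharp
// Uppercase letters are outer-face turns; lowercase letters turn the outer
// and internal faces together (the double-arrow turns); a trailing
// apostrophe means counter-clockwise.
struct Move
{
    public char Face;       // U, D, L, R, F or B
    public int Layers;      // 1 = outer face only, 2 = outer + internal
    public bool Clockwise;

    public static Move Parse(string token)
    {
        return new Move
        {
            Face = char.ToUpper(token[0]),
            Layers = char.IsLower(token[0]) ? 2 : 1,
            Clockwise = !token.EndsWith("'")
        };
    }
}

// Example: Move.Parse("u'") gives Face = 'U', Layers = 2, counter-clockwise.
```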

How To Solve A 4x4x4 Rubiks Cube - Description Of The Algorithm

The algorithm is divided into three steps.

Step 1, Solving The Centres

The first step in the solution is to solve the 4 Centre Pieces on each face of the cube.

Step 2, Pairing up the Edges

The next step is to Pair up the 24 Edges into 12 distinct Double Edge Pairs (Dedges).

Step 3, Finishing the Cube

When you have solved the Centres and Paired up the Edges, your 4x4x4 Rubik's Cube will look like a 3x3x3 Rubik's Cube.

You can finish off the cube in the same way as a 3x3x3.
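Put together, the three steps form a simple pipeline. The sketch below only shows that structure; Cube4 and the helper bodies are hypothetical placeholders for the steps detailed in the rest of this tutorial:

```csharp
using System.Collections.Generic;

// The shape of the 3x3x3 reduction method, as a solver skeleton.
class Cube4 { /* sticker state would live here */ }

static class ReductionSolver
{
    public static List<string> Solve(Cube4 cube)
    {
        var moves = new List<string>();
        SolveCentres(cube, moves); // Step 1: 4 centre pieces per face
        PairEdges(cube, moves);    // Step 2: 24 edges into 12 dedges
        SolveAs3x3(cube, moves);   // Step 3: finish like a 3x3x3, fixing
                                   // parity errors when they appear
        return moves;
    }

    static void SolveCentres(Cube4 cube, List<string> moves) { /* ... */ }
    static void PairEdges(Cube4 cube, List<string> moves) { /* ... */ }
    static void SolveAs3x3(Cube4 cube, List<string> moves) { /* ... */ }
}
```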

The Color Scheme

The 4x4x4 Rubiks Cube is an even cube and has no fixed Centre pieces to refer to.

There is no quick way to determine which color goes where in relation to the others. It is helpful to have a color scheme memorised:

Standard Color Scheme

  • Yellow opposite White
  • Blue opposite Green
  • Red opposite Orange

If your cube is scrambled (or it doesn't have the standard color scheme), there is an easy way to determine the scheme.

Simply solve the corners of your 4x4x4 (assuming that you can solve the Corners of a 3x3x3).

Once you've figured out your colour scheme, memorize it or write it down.

Swapping Two Opposite Centres

While solving your 4x4x4 Rubik's Cube, you may at some point make a mistake with your Centres, such as transposing two Opposite Centres.

There is an easy way to fix it.

How To Solve A 4x4x4 Rubiks Cube - Algorithm

Now that you understand the method, it is time to put it into practice.

Begin with the first step: Solving The Centres.

Acknowledgement: Table Of Contents by Darkside

OWEN CORSO: Hello and welcome to this Google Web Designer video tutorial.

I'm Owen Corso from Google.

And today, we're going to build a rich media expandable creative with video.

Let's start by selecting File, New File.

This opens a dialog box where we will set up our ad.

First, let's choose our environment.

We have four options. The default is Display & Video 360, so we will leave that as is.

Next, we can select the type of ad.

We want to make an expandable, so we select Expandable on the left.

Next, we can set up our ad's dimensions.

We are building a 320 by 50 that expands to 480 by 250.

So I will make those changes.

We then assign the creative a name.

I will leave my Save To location as the default, and leave the animation mode set to Quick.

Once I'm happy with my settings, I click OK.

Google Web Designer creates the initial pages of the ad for me with the dimensions I defined.

The collapsed page already contains a Tap Area event to expand the ad and an expanded page with a close tap area to collapse back down.

It also has added all the initial code needed for the ad to talk to the ad server and collect tracking metrics.

Those metrics are built into the components, and we can assign unique identifiers to each component as we go.

So now I can start adding the graphic elements I've already prepared.

I drag a background image for our initial ad state and drop it onto the stage, then align it to the stage, and layer it behind the tap area by sending it to back.

Now, let's switch to our expanded page.

Let's add a background image by dragging my image file to the stage.

I can also add a button to the stage by dragging the Tap Area component.

Let's make a background exit tap area.

I will size and align it, and then I will give it a unique name.

To add functionality to the button, I will add an event using the plus button in the Events toolbar.

This brings me to my Actions panel, where we assign all of the metrics to our ad instead of coding them manually.

I'm going to select the tap area I just named BackgroundExit from the list.

Choose Tap Area, Touch/Click as the event.

Google Ad, Exit ad.

On the Receiver panel, I select gwd-ad.

Lastly, I give it an exit identifier and a destination URL.

For more in-depth details on the event model, check out the Events and Metrics video.

Next, let's add a video component.

You drag it to the stage, then give it a name and size it properly.

Tell it how to behave.

I want it to autoplay and start muted.

And you target the video file here.

This component has all of the metrics built in, so you can avoid hand coding them in the ad.

OK.

Let's preview our ad.

On page load, we see our collapsed state.

When we click, the ad expands to our expanded page.

Our video behaves as we told it to, and clicking on the background exits to our landing page.

Once the ad is built and functioning as you want, it is ready to publish.

Go to File, Publish.

And you're presented with a few options: Publish Locally, to Google Drive, and, finally, to Studio.

Let's choose Publish Locally.

This is where you can control how the ad is output.

For instance, you can add polite load to the ad, which delays the ad load until after the page content loads.

You can also set it to minify the code and add browser prefixes automatically.

We'll leave all these settings at the default.

Click Publish, and Web Designer will wrap up all of your files in a nice little zip for uploading to Studio.

Now, let's test it out in Studio.

Let's make a new creative of expanding type.

Drag the zip file to upload our creative to Studio.

Now, let's preview our creative.

As you can see, I can expand the unit, play the video, and trigger the background exit we added.

You can see these events logging to the output console.

And that's an overview of Studio integration features in Google Web Designer.

As part of Intuit's core initiatives to further cultivate mobile-first thinking and accelerate growth into global markets, the Intuit Small Business Group's Design Org has shifted from a model of designing and shipping prioritized features to one where every designer is responsible for end-to-end, cross-device experiences: designing for our products and services on desktop web, mobile web, desktop client apps, and native mobile apps.

As a design lead for our ecosystem of native mobile products over the past few years, I started getting a lot of questions around guidance and principles for mobile design. I noticed many of the designers, product managers, and engineers who are new to mobile app design, or who don't live and breathe mobile app development every day, didn't fully understand the nature of designing for native platforms and device capabilities. To reinforce the notion that "cross-device" and "mobile first" isn't just about designing for smaller screens and scaling across multiple device sizes, I collaborated with the Design Systems Team to establish a set of mobile patterns and guidelines so that designers can hit the ground running, or run even faster, with mobile design. We recently published some guidelines, tools, and resources on our internal design toolkit, and I thought it would be great to share some key points and takeaways with a wider audience, as the documentation addresses many frequently asked questions around mobile patterns.

First, I want to say that what I write here is simply guidance. Our mantra for any kind of pattern guidelines documentation we provide is, "Give me guidance, but let me drive." We don't want to be prescriptive, and we don't want to tell you how to design, but this is a good starting point to get you going on native mobile designs. Why are we calling out native mobile? As we continue to design device-agnostic, end-to-end experiences and features for products and services, we must remember not to neglect the different platforms (i.e. our mobile products are currently offered on both iOS and Android).

Overall Principles

1. Respect the platform

We documented patterns and components based on native operating systems that we have apps on: iOS and Android. When designing for native platforms, you should consistently refer to the native OS design guidelines first for maximum quality. Keep in mind that native platform guidelines constantly evolve, so it’s always good practice to stay on top of these guidelines and refresh your memory and knowledge often.

Apple’s Human Interface Guidelines: https://developer.apple.com/ios/human-interface-guidelines/

Google’s Material Design Guidelines: https://material.io/guidelines/

2. Focus on the customer benefit

Always design for the customer benefit first. No use case is the same, and many use cases have exceptions. Do not design something simply because you can reuse a pattern or component for another feature. Design patterns help ground us as a system and unify an experience across an ecosystem of products, but they should by no means be the first or last stop in the design process. Always question yourself: How will this benefit the customer?

3. Think device first

Push your thinking beyond “mobile first.” Start thinking about leveraging device capabilities first. The native mobile device has a lot to offer: touch, voice, pressure, location tracking, accelerometer, notifications, etc. You are designing around the device, the platform, the user experience. How can these device features be utilized in our products? How can the mobile device benefit users beyond the screen interface in front of them?

4. Keep scalability in mind

Growing from the previous principle, remember that a mobile device isn't just a phone. Scalability across devices, more specifically between a phone and a tablet, is a common challenge among designers. When we think of mobile devices, we know there are tablets, phones, and phablets (not small enough to be a phone, not big enough to be a tablet). Some of the recurring questions I get asked are: Should there be parity between web and tablet designs? Can we translate the phone patterns to be the same on tablets? How do we design for phablets? To answer these questions, we researched with users, took an in-depth look at device interfaces and screen sizes, and set some standards. While the phone and tablet share many similarities, users use them very differently.

PHONE INTERFACE

Mobile interfaces LESS THAN 7 inches in width should be treated as phones. Syntax and layout should be aligned across these devices as much as possible, but we also want to leverage native platform guidelines and capabilities first and foremost.

A fundamental design principle for mobile phones is to include only necessary information. Do not overload the user with more than they need to know or take action on. The phone is a convenient way to consume information on the go. Small business owners use a phone to complete quick actions while they are not in the office, capture data, view content, then perhaps close it out and come back to take a look later.

TABLET INTERFACE

Mobile interfaces GREATER THAN 7 inches in width should be treated as tablets. Syntax and layout should be aligned across these devices as much as possible, but by no means do they need to align exactly with the sub-7-inch interfaces.

Tablet designs should look and feel like desktop web, but they should function like the phone (with tap/swipe/hold gestures, transitions, etc.). Many users view the tablet as a hybrid device. We’ve encountered many small business owners that don’t own a computer, but they own a tablet, and those users treat the tablet as a reliable device they can do work on.

To scale for the future or for additional digital interfaces, you should also think about non-mobile touchscreens: TV displays, interactive table displays, automobile displays, laptop displays that you can touch, etc. You want to make sure you can scale for multiple screen sizes, large and small, and not limit yourself to thinking only about the devices your products currently support.
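
As a toy sketch of the 7-inch rule above (illustrative code, not Intuit's), a width-based classifier could look like this; how you obtain the physical width per platform is left out:

```csharp
static class DeviceClass
{
    // Physical width in inches = pixel width / pixels-per-inch.
    // Under 7 inches is treated as a phone, otherwise as a tablet.
    public static string Classify(int widthPixels, float dpi)
        => widthPixels / dpi < 7f ? "phone" : "tablet";
}

// Example: DeviceClass.Classify(1200, 150) returns "tablet" (8 inches wide).
```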

Patterns and Guidelines

This list is a small subset of patterns and guidelines that I’ve found designers have been commonly asking around best practices for our mobile products.

Screen Transitions

One of the major aspects that make navigating content on native mobile platforms so delightful is the transitions between screens. Two questions I get asked a lot are: When should a push (screen pushed in leftward from the right) be used? When should a modal (screen pushed upward from the bottom) be used? We’ve established the following best practices:

A push is essentially the fundamental screen transition for viewing a new screen that is stacked on top of the previous screen. There is typically a Back button so the user can return to the last viewed screen. For screens that are primarily for viewing, such as transaction detail screens or lists, we use a push.

A modal is typically used when we require the user to select, edit content, or input data. All of our transaction forms use full-screen modals, as they require more user thought due to the several form fields on one screen. The title bars for these screens typically have Cancel and Save or Cancel and Done actions. Then, when you tap Save, you get a push screen, because you are viewing (not editing) the saved content.
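
To make the rule of thumb explicit, here is a tiny illustrative encoding of the decision; the types are hypothetical, not from our design system:

```csharp
// Viewing screens get a push; screens that require selecting, editing, or
// inputting data get a full-screen modal.
enum ScreenIntent { View, Edit }
enum Transition { Push, Modal }

static class TransitionRule
{
    public static Transition For(ScreenIntent intent)
        => intent == ScreenIntent.Edit ? Transition.Modal : Transition.Push;
}

// Example: a transaction detail screen is ScreenIntent.View, so it is
// pushed; tapping Save on a form modal returns to a pushed, view-only screen.
```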

Call to Actions

This section highlights a question I often get: "Should this call to action be a button or a text link?" In both the iOS and Android design guidelines, text as buttons is the norm and the recommendation. However, I feel that when we use text, especially with a system font against a dark or light background, we lose a major opportunity to incorporate brand elements, such as our ecosystem green color or line iconography. So we've deliberately moved away from using text as calls to action, and instead use buttons with high contrast, which also makes it very clear that it is a call to action and not just part of the screen content.

Empty States

Our empty state screens provide a first impression to users who are new to our products. Each usually consists of an illustration, a brief description, and a clear call to action. A common and current design trend is the use of gray text on a light background. If you decide to follow that trend, make sure the text is readable and accessible by analyzing the foreground and background colors to meet the WCAG 2.0 color contrast ratio requirements.
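
If you want to check a color pair programmatically, the WCAG 2.0 contrast math is straightforward; here is a self-contained sketch:

```csharp
using System;

// WCAG 2.0 contrast ratio: relative luminance of each color, then
// (lighter + 0.05) / (darker + 0.05). Normal text needs at least 4.5:1.
static class Wcag
{
    // Linearize an 8-bit sRGB channel per the WCAG 2.0 definition.
    static double Linear(int channel)
    {
        double c = channel / 255.0;
        return c <= 0.03928 ? c / 12.92 : Math.Pow((c + 0.055) / 1.055, 2.4);
    }

    static double Luminance(int r, int g, int b)
        => 0.2126 * Linear(r) + 0.7152 * Linear(g) + 0.0722 * Linear(b);

    public static double ContrastRatio(int r1, int g1, int b1,
                                       int r2, int g2, int b2)
    {
        double l1 = Luminance(r1, g1, b1);
        double l2 = Luminance(r2, g2, b2);
        double lighter = Math.Max(l1, l2), darker = Math.Min(l1, l2);
        return (lighter + 0.05) / (darker + 0.05);
    }
}

// Example: gray #777777 on white comes out to about 4.48:1, just below the
// 4.5:1 minimum for normal text.
```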

Carets

Firstly, yes, it's spelled caret, not carat or carrot. 🙂 Carets are used to promote discoverability. Historically, we tried to use carets for every instance where we wanted to indicate that the user should tap into the row to view more. However, in our forms, we are moving away from using carets, instead utilizing the extra real estate by creating visual cues and conversational content design to indicate tap targets. After some user testing with different design treatments, we've found that discoverability isn't as much of an issue as we thought. Users will naturally tap on rows, whether there's more information provided to them or not. We only want to use carets when absolutely necessary.

Action Sheets

General rule of thumb for native mobile design: use action sheets whenever there are multiple actions associated with a single call to action (that is not a system blocker). Apple's iOS guidelines call these action sheets; Google's Android guidelines call them bottom sheets.

Cards or Tiles

A card (or tile as other teams may call it) is a component acting as a rectangular container for a certain amount of information: visual elements, instructional text, diagrams, and action triggers. There are two types of cards based on appearance and usage: action card and info card.

Dialogs

We use native system dialogs for critical alerts, permissions related alerts, system blocker alerts, etc. The key word is “alert.” Note that for actions that aren’t related to these things, we try to use action sheets.

Fonts

The general rule for native mobile design is to use system fonts as much as possible. However, we needed to incorporate our brand and voice and tone to create what we call “QuickBooks Ownable Moments.” For large headlines and sub-headlines, we use our brand fonts. For body text, we use system fonts. For fonts within buttons, we use system fonts.

Toggles

Toggle switches are used to trigger a binary operation (i.e. turning something on or off), often replacing the web checkbox metaphor. We have a lot of checkboxes in our web products, so when we design for native mobile, we want to make sure we only replace binary checkboxes, the ones that enable or disable content, show or hide content or fields, turn tax on or off, or track returns for customers, rather than checkboxes used for selecting multiple items.

Again, these are just a few guidelines to get you started or to accelerate your mobile first design process, especially for native mobile. You are the driver and designer with creative license to define the end-to-end user experience for your products and services. Trust your gut, follow your instincts, but always remember to respect the platforms, focus on the customer benefit, think device first, and keep scalability in mind!

Yvonne So is a Principal UX Designer @Intuit currently crafting meaningful experiences for small businesses around the world. With a passion and mission for making technology more inclusive of everyone, she regularly speaks and writes about mobile UX, accessibility, innovation, and empathic design.