
Hello and welcome to this Google Web Designer video tutorial.

I’m Owen Corso from Google.


And today, we’re going to build a rich media expandable creative with video.

Let’s start by selecting File, New File.

This opens a dialog box where we will set up our ad.

First, let’s pick the type of project.

We have four options. The default is Display & Video 360, so we will leave that as is.


I’m no color expert. Far from it, actually. Throughout my career, I’ve depended on visual designers better than myself to produce an engaging palette and apply it harmoniously across a UI.

Yet, as a systems designer, I’m often in the position to provoke and validate color decisions as a system takes shape. Here are 16 lessons I’ve learned while stabilizing a primary palette, tint and shade choices, and secondary palettes, and while solving for accessible contrast.

Primary Palette

By primary, we’re talking colors used everywhere, including your brand colors, neutrals, and, typically, an interactive digital blue.

#1. Stabilize Brand Colors Quickly

Every organization has one, two, or no more than a few core brand colors. THE red. THE blue. THE orange. Settle on them. Even if you’re reasonably set up with a color variable or two, nothing signals a design system team that can’t get its act together like constantly changing primary colors.

Takeaway: Decide your essential brand colors early, because they spread widely, quickly.

#2. Involve Brand (If You Alter a Brand Color)

Is brand blue a bit dull? Can’t resist the urge to liven it up? Nothing poisons early collaboration more than a casual “We saturated the brand orange for web” followed by brand reacting with “You did what?” Oh the sacrilege!

Takeaway: Brand colors are the brand team’s territory. So discuss adjustments with them and defer to their judgment as needed.

#3. Drop the Neutral Neutrals

From dark-as-night charcoal to fluffy light gray, neutrals provide essential UI scaffolding. Loading a system with neutrals, even a few, risks giving teams access to muddy colors. They can also lead to “wireframey” designs. And, neither dark nor light type has sufficiently accessible contrast on a medium gray background.

Takeaway: Provide a few light grays and a few dark grays to achieve useful contrast, but don’t get wishy washy wireframey. Consider avoiding medium grays in between.

#4. Go “Digital Blue.” Everybody Else Does.

My past five design systems settled on a saturated blue as a default button and link color. Links have always been blue, perhaps since the dawn of the first browser. This “Digital” blue, a utility color for links and clickable items, is essential in any core palette.

Takeaway: When (not if 😉 ) you go with your “Digital Blue,” choose an accessible one and make sure it doesn’t clash with the brand’s own blue, or red, orange, purple, or green.

Tints & Shades Per Color

You can’t have just a few colors and call it a day, right? System users often need to tune a color choice across a range, reuse with ease, and know their boundaries.

#5. Stack the Tint & Shade Range, Per Color

Color palette display patterns long predate the web. Yet I still love me a compactly arranged tint stack. They can be just…gorgeous. The best stacks visualize more than just a color, combining its name with HEX codes, code variables, and other indicators (such as prohibiting overlaid type). A quick scan is all you need.

Takeaway: Stack available colors in each hue, and treat the stack as a visualization to include important details compactly.

Material Design’s Indigo and Deep Orange

#6. Name Tints & Shades by Brightness

We’ve all been there. A month into the system, the neutrals ($color-gray-1, $color-gray-2, …, $color-gray-7) are stable. And then, in a stroke, you’ve got another tint to add, stuck between -1 and -2. That numbering system stinks.

Takeaway: Scale color names between 0 and 100 based on brightness, such as $color-gray-05 and $color-gray-92. The scale reflects a familiar range from dark to light, allows for injecting new options in between, and heck if I won’t remember $color-gray-93 until we retire it later.
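For illustration, that scale maps cleanly onto SASS variables. A minimal sketch (hex values are hypothetical, not a recommended palette):

    // A brightness-scaled neutral stack (hypothetical values).
    // The two-digit suffix approximates brightness from 0 (black) to 100 (white),
    // so a new tint between -20 and -40 simply becomes $color-gray-30.
    $color-gray-05: #0d0d0d;
    $color-gray-20: #333333;
    $color-gray-40: #666666;
    $color-gray-60: #999999;
    $color-gray-80: #cccccc;
    $color-gray-92: #ebebeb;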

#7. Limit Tint & Shade Quantity

At the core of a good system is choice without endless options, a stable aesthetic to serve as a starting point. Odds are, you aren’t Material Design, intended to serve countless products. In most cases, a design system need not offer boundless choices. The more choices you provide, the tougher it’ll be to control harmonic combinations and a consistent feel across applications.

Takeaway: Offer a handful of options and avoid tedious variety. Empower system users with just enough choice: more than a single option, but only up to a few intentional choices.

#8. Tell Me How To Transform: Hand-Pick or Functionally

Modern tools like SASS and Stylus offer transformation functions like darken and lighten to shift a color by a brightness percentage. These handy tools enable you to alter a color for subtle contrasts like a hovered button or tiered navigation.

But transforms can be troublesome: carefully crafted base colors can become inaccessible alternatives (see below), a page’s overall palette can muddy, or a “5% system” that works on moderately bright colors yields insufficient contrast for a very light or dark case.

Takeaway: Deliberately allow — or avoid — color transformations in your system. If you endorse the practice, then offer examples of when and how to do it effectively in your system, such as 5–10% for moderately bright cases and 10–20% in more extreme cases. If transformations should be avoided, document that succinctly.
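As a sketch of that guidance using SASS’s built-in darken() (base colors below are hypothetical):

    // Hand-tuned transform guidance (hypothetical base colors).
    $color-blue-50: #1a73e8;  // moderately bright: small shifts read clearly
    $color-gray-92: #ebebeb;  // very light: needs a larger shift to register

    .button {
      background-color: $color-blue-50;
      &:hover {
        background-color: darken($color-blue-50, 8%);  // 5-10% for moderate cases
      }
    }

    .nav--tiered {
      background-color: $color-gray-92;
      border-color: darken($color-gray-92, 15%);       // 10-20% for extreme cases
    }

    // Caveat from above: darken() can silently push a vetted color below
    // accessible contrast, so re-check any transformed pair.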

Secondary Palettes

Beyond the brand colors and their variants, well-considered color systems array the broader variety of colors reserved for varied purposes.

#9. Define Meaningful Sets Like Feedback Colors

Most systems reserve a certain red for errors, green for success, yellow for warnings, and (possibly a lighter sky) blue for informational messages. Feedback color is critical because it’s positioned at the top of the page, interacts with other key components, and/or is encountered as a result of an unwelcome circumstance. Without system guidance, such messages become embedded in product code, the result of product teams solving a challenge quickly and moving on.

Takeaway: Explore and define the standard feedback colors and other relevant sets to ensure that colors fit harmoniously, rather than wedging them in later or having teammates recall “I just grabbed it from Google.”

Typical feedback colors: success, warning, error and informational
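One way to reserve such a set is as a small, named token group, so product teams stop grabbing their own reds and greens. A minimal SCSS sketch (hex values are hypothetical):

    // Reserved feedback colors (hypothetical values, one per message type).
    $color-feedback-success: #2e7d32;  // green
    $color-feedback-warning: #f9a825;  // yellow
    $color-feedback-error:   #c62828;  // red
    $color-feedback-info:    #0277bd;  // lighter "sky" blue

    $feedback-colors: (
      "success": $color-feedback-success,
      "warning": $color-feedback-warning,
      "error":   $color-feedback-error,
      "info":    $color-feedback-info,
    );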

#10. Illustrate Theme Variety

In some systems, color use is customized per product, section, or brand. Often, this is a result of relating a master brand (think Marriott International) to its sub-brands (think Courtyard Hotels, Ritz-Carlton, and Moxy Hotels). Or it’s prefab themes like Ambient Warmth and Frozen Blue. Maybe the user is in complete control, and you need to illustrate the breadth (all the havoc) of what they can do.

Takeaway: Reveal the range of themes available compactly, and set boundaries around allowable theme colors in certain contexts.

Theme colors for multiple Marriott.com hotels, derived from product pages

#11. Define How Theming Works

It’s not enough to simply say “Go ahead and theme it!” A theme color may apply to predictable accents throughout a UI such as button background-color, active tab background-color, or a primary navigation’s thick border-top. Just as important, theme colors may be forbidden from altering other bits, such as long form type or — yikes! — a link color that ends up invisible.

Takeaway: Identify how theming works, particularly via reference to specific UI element properties in play. Just as important, articulate which — if not most — elements are off limits.
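To make the properties “in play” explicit, a theme hook can centralize exactly what a theme may recolor. A hypothetical SCSS sketch:

    // A hypothetical theme hook: the only properties a theme may recolor.
    @mixin apply-theme($theme-color) {
      .button--primary { background-color: $theme-color; }
      .tab--active     { background-color: $theme-color; }
      .nav--primary    { border-top: 4px solid $theme-color; }
      // Deliberately absent: long-form type color, link color, form text.
    }

    // Scoped themes: rules compile to e.g. .theme-frozen-blue .button--primary.
    .theme-ambient-warmth { @include apply-theme(#d97842); }
    .theme-frozen-blue    { @include apply-theme(#3f7fbf); }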

#12. Avoid Guiding on Color-Mixing Until (At Least) Dust Settles

One of my favorite all time design system tools is Google’s MDL Color Customizer, which enables users to combine primary and secondary UI colors effectively. It’s so easy, and the outcome so helpful. Yet, the system teams I work with either don’t want to provide this kind of flexibility or lack the time and care necessary to solve such a combinatoric challenge.

Takeaway: Avoid the rabbit hole of solving for a vast array of color combinations unless it’s a core system value. In most cases, system users will pair up their own combinations or benefit from a tool more dedicated to doing just that. Help them propagate their choice rather than solving for every combination they may consider. That experimentation is their job.

Serve users of your system by making it efficient to propagate their choice through a product, rather than making the choice for them.

Contrast & Accessibility

Solving for accessible color contrast should be a core practice of setting up any digital color system from the get-go. However, design can be a tumultuous place, and teams can sometimes lose focus. Or some members don’t know about accessibility. Or they simply don’t prioritize it.

A systems team can ingrain accessible practices into its workflow to provoke and spread accessibility values broadly across an enterprise.

#13. Check Contrast Early & Ritually

It happens often: a few weeks or days before a product — or design system — launch, somebody finally notices. The design team hasn’t taken the necessary care to ensure the primary and secondary color palettes are applied in a way that meets the WCAG 2.0 color contrast ratios of 3:1 (for large, heavier type) or 4.5:1 (for standard type). So designers — and then their developers — scramble to determine fixes and inject them into the code.

Takeaway: Any system designer responsible for color must be familiar with WCAG 2.0 rules, have a tool (like Tanaguru) to test color pairs, and incorporate the practice into color selection.

Tanaguru, one of many accessibility calculators online
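Beyond an online calculator, the check can live in the codebase itself. Here is a minimal sketch of the WCAG 2.0 computation as a SASS function, assuming Dart Sass’s sass:math module (function names are mine, not a standard API):

    @use "sass:math";

    // Linearize one sRGB channel (0-255) per the WCAG relative-luminance formula.
    @function srgb-linearize($channel) {
      $c: math.div($channel, 255);
      @if $c <= 0.03928 {
        @return math.div($c, 12.92);
      }
      @return math.pow(math.div($c + 0.055, 1.055), 2.4);
    }

    // Relative luminance: weighted sum of the linearized channels.
    @function luminance($color) {
      @return 0.2126 * srgb-linearize(red($color))
            + 0.7152 * srgb-linearize(green($color))
            + 0.0722 * srgb-linearize(blue($color));
    }

    // Contrast ratio is (L1 + 0.05) / (L2 + 0.05), lighter luminance on top.
    @function contrast-ratio($a, $b) {
      $l1: luminance($a);
      $l2: luminance($b);
      @return if($l1 > $l2,
        math.div($l1 + 0.05, $l2 + 0.05),
        math.div($l2 + 0.05, $l1 + 0.05));
    }

    // @debug contrast-ratio(#767676, #fff);  // ~4.5, right at the body-text line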

#14. Explore Accessible Color Choices Across Ranges

A drawback of the WCAG guidelines is their stark threshold: a color pair passes or fails. This leaves designers yearning for more but, worse, leaves stakeholders flummoxed at how badly the color pair fails and how much it needs to change.

Conversation quickens when we reveal a spectrum of choices, with the pass/fail line fairly evident. This transforms the process from trial and error to tuning a dial. Before, it was “That pair failed. Let’s try again.” Now, it’s an enlightening “Oh, so that’s how dark the blue needs to be” followed by a rational discussion to balance visual tone, brand identity, and accessibility sensitivities.

Takeaway: When exploring accessible color contrast, show a range of choices to help a team select a color that passes the test.

Exploring neutral and interactive colors by showing multiple choices across a range

#15. Solve the Reverse: Light on Dark and Dark on Light

When creating a system, it’s up to the systems designer to be mindful of and solve for the entire range of choices on offer. It’s not enough to just test for accessibility problems as they arise. Instead, a color palette should be thoroughly reviewed prior to publishing a system for reuse.

This is especially true for reverse color treatments. It’s very common for a system to default to dark text on a light background. However, most find themselves reversing color treatments, whether black and white on light and dark neutrals, or tints of another primary or secondary color.

Takeaway: Solve for and recommend reversed pairings to adopt or avoid.

A table of calculated contrast (using a SASS function) between neutral backgrounds and interactive blue alternatives
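A table like that can be generated rather than hand-checked. Assuming the contrast-ratio() function sketched under #13 (palettes below are hypothetical), a pair of loops surfaces which reversed pairings to adopt or avoid:

    // Hypothetical neutrals and interactive blue alternatives.
    $neutrals: (#1a1a1a, #4d4d4d, #f2f2f2, #ffffff);
    $blues:    (#0b57d0, #1a73e8, #4d90fe);

    @each $bg in $neutrals {
      @each $blue in $blues {
        // One @debug line per pairing, to eyeball against the 3:1 and 4.5:1 bars.
        @debug "#{$blue} on #{$bg} -> #{contrast-ratio($blue, $bg)}";
      }
    }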

#16. Use Color to Provoke Broader Accessibility Awareness

Color is fundamental to a system, and accessible color contrast is fundamental to color. This injects accessibility smack dab into the middle of a system’s formation. People that matter are paying attention: brand managers, design leads, developers, and execs. What a wonderful opportunity to use color to open a door to the broader array of accessibility considerations.

Takeaway: Seize the opportunity to advocate for accessibility. Always be probing a collaborator’s knowledge of accessibility (or lack thereof) and educate and advocate all you can.


Next, we can select the type of ad.

We want to make an expandable, so we select Expandable on the left.

Next, we can set the ad’s dimensions.

We are building a 320 by 50 that expands to 480 by 250.

So I will make those changes.

We then assign the creative a name.

I will leave my Save To location as the default, and leave the animation mode set to Quick.

Once I’m happy with my settings, I click OK.

Google Web Designer creates the initial pages of the ad for me with the dimensions I defined.

 


The collapsed page already contains a Tap Area event to expand the ad, and an expanded page with a close tap area to collapse back down.


Backstory

I am a product designer at Google, and I joined the company through Sparrow, a French startup that was acquired on July 20, 2012. Since then, I have worked with the Gmail team to build, from scratch, a flagship product that became Inbox by Gmail. It shipped on October 22, 2014.

I had designed productivity applications for a few years, and I felt like I had reached a tipping point. I wanted to expand my skill set, learn new things every day, and get better at something I’d never touched. I needed new challenges to reboot myself by leaving my comfort zone.

I got interested in virtual reality around the Oculus Kickstarter period because of the immersiveness and endless possibilities that came with it. There is nothing more exciting than building for a new medium and exploring an uncharted territory.

I joined the Google Cardboard and Virtual Reality team on April 17, 2015. Thanks to Clay Bavor and Jon Wiley for this great opportunity.

Another dimension

My first weeks on the team were as scary as it gets. People used words I had never heard and asked me questions I didn’t know how to answer.

I am not going to lie: ramping up on the jargon was not easy, but I was expecting that. Virtual reality is a deep field (pun intended), grouping together a variety of job titles, each requiring a very specialized skill set. The first weeks were intense, and day after day I had a better vision of the big picture. Slowly, the pieces came together. I found out which roles would be the best fit, what I wanted to do, and what was required to get there. Regardless of the mission, I knew I would have to learn a lot, but I was prepared for this challenge. My feelings varied from one day to the next: from super excited to create and learn something new, to super scared because of the colossal amount of knowledge I still had to learn. Working with smart and knowledgeable people reinforced these mixed feelings.

Everything is going to be alright

I told myself and firmly believed that the dots would connect eventually. I am a passionate person, and I knew that I didn’t mind spending hours learning and experimenting.

During my product design career, I got better at understanding, identifying, and resolving user problems. Making things easy to use and delighting users is not that different, no matter the medium.

The core of the mission is the same, but to get you from point A to B there are some interesting things to know.

  • Sketching is, again, at the core of everything. During any brain dump or design phase, sketching is as fast as it gets. I’ve sketched more since I joined this team than in my entire career.
  • Any design skills, as diverse as they may be, will be a huge benefit.
  • Photography knowledge will help you, because you will interact with concepts such as field of view, depth of field, caustics, exposure, and so on. Being able to use light to your advantage has already been very valuable to me.
  • The more you know 3D and its tools, the less you will have to learn. It’s pretty obvious, but be aware that at some point you might do architecture, character, and prop modeling, rigging, UV mapping, texturing, dynamics, particles, and so on.
  • Motion design is important. As designers, we know how to work with devices with physical boundaries. VR has none, so it’s a different way of thinking. “How does this element appear and disappear?” will be a recurring question.
  • Python, C#, C++, or any previous coding skills will help you ramp up faster. Prototyping has a big place because of the fundamental need for iteration. This area is so new that you might be one of the first to design a unique kind of interaction. Any recent game engine, such as Unity or Unreal Engine, relies heavily on code. There is a large, active community in game and VR development with a huge amount of training and resources already.
  • Be prepared to be scared, and get ready to embrace the unknown. It’s a new world that evolves every day. Even the biggest industry-leading companies are still trying to figure things out. That’s how it is.

Roles

Design teams will evolve because this new medium opens a lot of possibilities for creation. Think about the video game or the film industry for instance.

I think there will be two big design buckets.

The first one will be about the core user experience, interface, and interaction design. This is very close to how product design teams are structured today (visual, UI, UX, and motion designers, researchers, and prototypers).

Each role will have to adapt to the rules of this new medium and keep a tight relationship with engineers. The goal will always remain the same: create a fast iteration cycle to explore a wide range of interactive designs.

On the other hand, content teams will replicate indie and game design studio structure to create everything from unique experiences to AAA games. The entertainment industry as we know it in other mediums will likely be very similar in VR.

Ultimately, both will have a close relationship to create a premium end to end experience. Both industries have a great opportunity to learn from each other.

To wrap up on my personal experience, I think being a product designer in VR is not that different but requires a lot of dedication to understand and learn a vast field of knowledge.

First step and fundamentals of VR design

First step

In this second part of the article, I will try to cover the basics you need to know regarding this medium. It’s meant to be designer oriented and simplified as much as possible.

Let’s get (a little bit) technical

The new dimension and immersiveness are a game changer. There is a set of intrinsic rules you need to know in order to respect your users’ physiology and treat them carefully. We regrouped some of these principles in an app so you can learn them through a great immersive experience.

Download Cardboard Design Lab

You can watch Alex’s presentation at I/O this year which goes more in-depth. The following is a small summary.

If you have to remember just two rules:

  • Do not drop frames.
  • Maintain head tracking.

People instinctively react to external events you might not be aware of, and you should be designing accordingly.

Physiological comfort. This covers notions like motion sickness. Be careful when using acceleration and deceleration. Maintain a stable horizon line to avoid the “seasickness” effect.

Environment comfort. People can experience various discomforts in certain situations, like heights, small spaces (claustrophobia), big spaces (agoraphobia), and so on. Be careful with scale and colliding objects. For example, if someone throws an object at you, you will instinctively try to grab it, dodge, or protect yourself. Use that to your advantage, not to the user’s disadvantage.

You can also appeal to the user’s senses to help you create more immersive products and cues. You can find inspiration in the game industry, which uses all sorts of tricks to guide players during their journey. Here are a couple:

  • Audio for spatial positioning.
  • Light to show a path and help the player.

Do not hurt or over-fatigue your users. It’s a classic mistake when you start to design for this medium. As cool as they look, Hollywood sci-fi movies fed us interactions that go against simple ergonomic rules and can create major discomfort over time. Minority Report gestures are not suitable for a long period of activity.

I made a very simplified illustration of XY head-movement safe zones. Green is good, yellow is OK, and red should be avoided. There are some user studies made public (links at the bottom of the page) that will give you more in-depth information on that topic.

A simplified illustration of XY head movement safe zones.

Bad design can lead to more serious conditions.

As an example, have you heard about Text Neck? A study published in Neuro and Spine Surgery measured the varying pressure in our neck as the head moves to different positions. Moving from a neutral head position looking straight ahead to looking down increases the pressure by 440%. The muscles and ligaments get tired and sore, the nerves are stretched, and discs get compressed. All of this can lead to serious long-term issues such as permanent nerve damage.

TL;DR: Avoid extended look-down interactions.

Degrees of freedom

The body has six different ways of moving in space. It can rotate and translate in XYZ.

3 Degrees of freedom (Orientation tracking)

Phone-based head-mounted devices such as Cardboard and Gear VR track orientation via an embedded gyroscope (3DOF). Rotations on all three axes are tracked.

6 Degrees of freedom (Orientation + Position tracking)

To achieve six degrees of freedom, the sensor(s) will track positions in space (+X, -X, +Y, -Y, +Z, -Z). High-end devices like HTC Vive or Oculus Rift are 6DOF.

Tracking
Making 6DOF possible frequently involves optical tracking of infrared emitters by one or more sensors. In Oculus’s case, the tracking sensor is on a stationary camera, while in Vive’s case the tracking sensors are on the actual HMD.

Oculus and Vive lighthouses position tracking

Inputs

Depending on the system you are designing for, the input method will vary and affect your decisions. For example, Google Cardboard has a single button; that’s why the interaction model is a simple gaze and tap. HTC Vive uses two six-degrees-of-freedom controllers, and Oculus will ship with an Xbox One controller but will eventually use a 6DOF dual controller, Oculus Touch. All of these allow you to use more advanced and immersive interaction patterns.

The good old Xbox One controller and Oculus Touch

There are also other kinds of inputs such as hand tracking. The most famous being Leap Motion. You can mount it to your Head Mounted Display (HMD).

Leap Motion on top of a DK2

This area constantly evolves as the technology catches up, but as of today, hand tracking is not reliable enough to be used as the main input. The principal issues relate to tracking hands, fingers, collisions, and subtle movements.

Even though it’s very familiar, using a game controller is a disappointing experience. It physically removes some of the freedom VR is creating. In FPS games, strafing and moving can cause some discomfort because of the accelerations.

On the other hand, the HTC Vive controllers reinforce the VR experience thanks to their six degrees of freedom, and Tilt Brush is a really good example. As I write these lines, I haven’t tried Oculus Touch, but every demo I have seen looks very promising. Check out the Oculus Toybox demo.

While designing user interfaces and interactions, inputs are the keystone that will drive some decisions differently depending on which method you are using. You should be familiar with all of them and aware of their limitations.

Tools

This is a big piece and might require a more in-depth article. I will focus on the most popular tools used in this industry.

Pen and paper

You just can’t beat them. It’s the first tool we use because it’s always around and doesn’t require many skills. It’s a proven way to express your ideas and iterate at a fast and cheap pace. These factors are important because, in VR, the cost of moving from wireframes to hi-fi is higher than in 2D.

Sketch

I still use it every day. Because of its ease of use, it’s the perfect tool that allows me to create a lot of explorations before moving to a VR prototype. It’s also handy for its export tools and plugins that are a huge time saver. If you are not familiar with that program, I wrote articles here and there.

Cinema 4D

I don’t see C4D as a competitor to Maya. Both are great tools, and each excels in its own way. When you don’t have a 3D background, the learning curve can be very steep. I like C4D because the interface and the parametric, non-destructive approach make sense to me. It helps me create more iterations quickly. I love the MoGraph modules, and a lot of great plugins are available. The community is very active, and you can find a lot of high-quality learning materials.

Cinema 4D motion explorations

Maya

Maya is colossal in a good and a bad way. It does anything and everything a 3D artist needs. Most of the games and movies are designed with it. It’s a robust piece of software which can handle massive simulations and very heavy scenes with ease. From rendering, modeling, animation, rigging, it’s simply the best tool out there. Maya is highly customizable, and that is one reason why it’s the industry standard. Studios need to create their own set of tools, and Maya is the perfect candidate to integrate any pipeline.

On the other hand, learning all the tools will require your full and unconditional dedication for quite some time. I mean weeks of explorations, months of learning and years of practice on a daily basis.

Unity

It’s most certainly THE prototyping tool where everything will happen. You can easily create and move things around with a direct VR preview of your project. It’s a powerful game engine with a great community and a ton of resources available in their store (the asset author determines the pricing). In the assets library, you can find simple 3D models, complete projects, audio, analytics tools, shaders, scripts, materials, textures and so on.

Their documentation and learning platform are stellar. They have a wide range of high-quality tutorials.

Unity3D mainly uses C# or JavaScript and comes with Microsoft Visual Studio. It doesn’t come with a built-in visual scripting editor, though you can find good ones in the asset store.

It supports all major HMDs and is the best for cross-platform builds: Windows PC, Mac OS X, Linux, Web Player, WebGL, VR (including HoloLens), SteamOS, iOS, Android, Windows Phone 8, Tizen, Android TV and Samsung Smart TV, as well as Xbox One & 360, PS4, PlayStation Vita, and Wii U.

It supports all major 3D formats and is great for 2D game creation. The in-app 3D editor is weak, but people have built great plugins to correct that. The software is license-based, but you can also use the free version to a certain extent. You can check the details on their pricing page. It’s the most popular game engine out there, with roughly 47% market share.

Unreal Engine

The direct competitor of Unity3D. Unreal also has great documentation and video tutorials. Their store is smaller because it’s much newer.

One of the big advantages over the competition is its graphics capabilities; Unreal is one step ahead in nearly every area: terrain, particles, post-processing effects, shadows & lighting, and shaders. Everything looks amazing.

Unreal Engine 4 uses C++ and comes with Blueprint, a visual script editor.

I haven’t worked with it too much yet, so I can’t elaborate more.

It has less cross-platform compatibility: Windows PC, Mac OS X, iOS, Android, VR, Linux, SteamOS, HTML5, Xbox One, and PS4.

Closing notes

Virtual reality is a very young medium. As pioneers, we still have a lot to learn and discover. That’s why I am very excited about it and why I joined this team. We have the opportunity to explore and we should, as much as we can. Understand, identify, build and iterate. Over and over.
And over again…

Resources

Community

  • Immersive design Facebook group

Videos

  • Google I/O 2015 — Designing for Virtual Reality
  • Oculus Connect keynotes
  • VR Design: Transitioning from a 2D to 3D Design Paradigm
  • VR Interface Design Pre-Visualisation Methods
  • 2014 Oculus Connect — Introduction to Audio in VR

Tutorials

  • Cinema 4D tutorials
  • Unity 3D tutorials
  • Maya and 3D tools tutorials

Articles

  • LeapMotion — VR Best Practices Guidelines
  • The fundamentals of user experience in virtual reality
  • Ready for UX in 3D?

Thanks to everyone who helped me with the rereading and improvements 💖


CHRIS: Welcome! My name is Chris, and I’m a designer on the Google Web Designer team. Today I’ll walk through a new dynamic template with an emphasis on text. We’ll cover customizations including configurable panels, selecting nested elements, dynamic text fitting, and editing groups, plus a demonstration of the template when uploaded into the Display & Video 360 Ad Canvas. Let’s get started.

First, let’s navigate to the template library. You’ll find the template under the thumbnail Data Driven for Display & Video 360. Notice we have three new template layouts to choose from: Blank Slate, Cue Cards, and Panorama. Today we’ll be focusing on Cue Cards. Let’s create a template using Cue Cards. I’m going to give the file a quick name and click Create.

Now, before we proceed in Google Web Designer, let’s take a quick look at a design schematic of Cue Cards. Cue Cards is a template that utilizes elements and assets such as a logo, a background image, a swipe gallery, a swipe gallery navigation, an animated arrow icon, and three dynamic text groups labelled SlideA through SlideC. You’ll also notice a few tap areas utilized for dynamic exits.

OK, jumping back into Google Web Designer, let’s review a few important panels for customizing and configuring the Cue Cards template. In the timeline, you’ll notice we have a lock icon. Let’s click the lock icon to unlock and edit the layer, and select the component swipe-vertical. Next, navigate to the Properties panel. The Properties panel is where we can configure the element’s attributes, style, position, and size, and also edit the component properties. You’ll find this component is driven through the use of the groups SlideA, SlideB, and SlideC.

Now let’s move to the Library panel. We’ll find the individual group definitions and group contents in the Library. We can right-click a group name, click Edit, and edit the contents of the group. Pro tip: to quickly inspect the elements inside a group, use the Outliner. The Outliner is a really cool new tool that lets us view nested elements inside the group; instead of clicking through your divisions, you can rapidly find which element you would like to target and edit.

You’ll also notice in this creative we have two divisions: wrap-SlideA and txt-wrap-SlideA. These are dynamic text divisions that have a little bit of CSS logic that helps to auto-center them depending upon what type of information comes down through the feed. Now let’s click on txt-description-SlideA in the Outliner. You’ll notice there’s a T icon next to txt-description-SlideA; this signifies that it’s a text element. With the text element selected, we come up to the panel at the top named Text. In the Text panel, you can configure the text fitting of dynamic text and also the styling of the text in your document. We can set a maximum size and a minimum size, and when the dynamic text is passed to the division, it will display the rendered, fitted text size.

Now let’s navigate back to the root of the document. You’ll notice we have breadcrumbs in the bottom left-hand corner of the stage, right above the timeline. Let’s click Div to jump back to the root of our document. Two more notable panels are the Events panel and the Dynamic panel. In the Events panel, we have events that are specific to controlling the animated arrow icon’s behavior during autoplay and during user gestures. Next to the Events panel we have the Dynamic tab. These are the dynamic bindings that enable this document to be bound dynamically, including assets, text, styling, and click exits. You’ll also notice Brand Awareness is highlighted. Brand Awareness is the schema we are going to be utilizing inside of the Display & Video 360 Ad Canvas. Click OK to exit the dialog.

As an added bonus, I would like to demonstrate the power of this creative. If I jump over to a mock from a visual designer, this is technically the spec the designer would like me to build to. This creative is dynamic, so the text could technically be interchanged. Let’s fast-forward to what the creative can look like built with Google Web Designer’s Cue Cards template. You’ll notice as I refresh this page, the creative auto-animates. The arrow tries to grab the user’s attention by animating and jumping. The creative also has a navigation on the right-hand side where we can drive the creative, and users can use gestures to scroll through the creative upon interaction.

Let’s say I wanted to publish this creative and upload it into the Display & Video 360 Ad Canvas. You might have a question: what is the Ad Canvas? The Ad Canvas is a visual editor you can use to build and edit creatives in real time. The Ad Canvas only supports our Google Web Designer data-driven templates and custom variations of them. So in DV360, my template is loaded in the center, and on the right-hand side I have a UI that is editable on the fly. You’ll notice text fitting is working. Variations and iterations can be knocked out, proofed, and signed off in a matter of minutes with Google Web Designer’s new data-driven templates in the Ad Canvas. The new dynamic workflow has never been easier. If you would like to learn more about the Ad Canvas, please look in the details section of this video for a link to a comprehensive Display & Video 360 Ad Canvas demonstration. This wraps up our video. Please have fun creating new dynamic ads. Thank you from the team at Google Web Designer.
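The “little bit of CSS logic” behind those auto-centering text divisions isn’t shown on screen. Purely as an illustration (these rules are an assumption, not the template’s actual stylesheet), a flexbox wrapper produces that behavior for whatever text the feed returns:

    // Hypothetical sketch of auto-centering a dynamic text division.
    .wrap-SlideA {
      display: flex;
      align-items: center;      // vertical centering
      justify-content: center;  // horizontal centering
      height: 100%;             // fill the slide
    }

    // Inner division keeps the fitted text centered and within bounds.
    .txt-wrap-SlideA {
      max-width: 90%;
      text-align: center;
    }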