
Hello and welcome to this Google Web Designer video tutorial.

I’m Owen Corso from Google.


And today, we’re going to build a rich media expandable creative with video.

Let’s start by selecting File, New File.

This opens a dialog box where we will set up our ad.

First, let’s pick the environment for our project.

We have four options. The default is Display & Video 360, so we will leave that as is.


There's a lot involved when it comes to building a good website.

You've got to start thinking about SEO from the ground up.

It can't be an afterthought.

The website needs to be safe and secure.

It needs to be fast.

It needs to look good.

It needs to be responsive so it looks good on a cellphone or a laptop or a tablet.

It's the bread and butter of your business.

So you want to put your best foot forward.

You've got to have a great website.


Next, we can select the type of ad.

We want to make an expandable, so we select Expandable on the left.

Next, we can set the ad’s dimensions.

We are building a 320 by 50 that expands to 480 by 250.

So I will make those changes.

We then assign the creative a name.

I will leave my Save To Location as the default, and leave the animation mode set to Quick.

Once I’m happy with my settings, I click OK.

Google Web Designer creates the initial pages of the ad for me with the dimensions I defined.

 


The collapsed page already contains a Tap Area event to expand the ad, and an expanded page with a close tap area to collapse back down.
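For reference, here is a minimal hand-written sketch of what those tap areas do, assuming the Studio Enabler API that Display & Video 360 rich media creatives commonly use; Google Web Designer generates the equivalent wiring for you from the Tap Area events, and the element IDs below are hypothetical.

```typescript
// Expand/collapse wiring for a 320x50 creative that expands to 480x250.
// Assumes the Studio Enabler API (Enabler.requestExpand / Enabler.requestCollapse).
declare const Enabler: {
  requestExpand(): void;
  requestCollapse(): void;
};

const expandTap = document.getElementById('expand-tap'); // tap area on the collapsed page
const closeTap = document.getElementById('close-tap');   // close tap area on the expanded page

// Collapsed page: tapping the 320x50 unit asks the ad to expand.
expandTap?.addEventListener('click', () => Enabler.requestExpand());

// Expanded page: the close tap area collapses back to 320x50.
closeTap?.addEventListener('click', () => Enabler.requestCollapse());
```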


Building Expanding Creatives - Google Web Designer


Backstory

I am a product designer at Google, and I joined the company through Sparrow, a French startup that got acquired on July 20, 2012. Since then, I worked with the Gmail team to build from scratch a flagship product that became Inbox by Gmail. It shipped on October 22, 2014.

I designed productivity applications for a few years, and I felt I had reached a tipping point. I wanted to expand my skill set, learn new things every day, and get better at something I had never touched. I needed new challenges to reboot myself by leaving my comfort zone.

I got interested in virtual reality around the Oculus Kickstarter period because of the immersiveness and endless possibilities that came with it. There is nothing more exciting than building for a new medium and exploring uncharted territory.

I joined the Google Cardboard and Virtual Reality team on April 17, 2015. Thanks to Clay Bavor and Jon Wiley for this great opportunity.

Another dimension

My first weeks on the team were as scary as it gets. People used words I had never heard and asked me questions I didn’t know how to answer.

I am not going to lie, ramping up on the jargon was not easy but I was expecting that. Virtual reality is a deep field (pun intended) grouping together a variety of job titles each requiring a very specialized skill set. The first weeks were intense and day after day I had a better vision of the big picture. Slowly, the pieces came together. I found out which roles would be the best fit, what I wanted to do and what was required to get there. Regardless of the mission, I knew I would have to learn a lot, but I was prepared for this challenge. My feelings varied from one day to another. From super excited to create and learn something new, to super scared because of the colossal knowledge I still had to learn. Working with smart and knowledgeable people around me reinforced these mixed feelings.

Everything is going to be alright

I told myself and firmly believed that the dots would connect eventually. I am a passionate person, and I knew that I didn’t mind spending hours learning and experimenting.

During my product designer career, I got better at understanding, identifying and resolving user problems. Making things easy to use and delighting users is not that different, no matter the medium.

The core of the mission is the same, but to get you from point A to B there are some interesting things to know.

  • Sketching is, again, at the core of everything. During any brain dump or design phase, sketching is as fast as it gets. I’ve sketched more since I joined this team than in the rest of my career.
  • Any design skills, as diverse as they are, will be a huge benefit.
  • Photography knowledge will help you, because you will interact with concepts such as field of view, depth of field, caustics, exposure and so on. Being able to use light to your advantage has already been very valuable to me.
  • The more you already know about 3D and its tools, the less you will have to learn. It’s pretty obvious, but be aware that at some point you might do architecture, character, and props modeling, rigging, UV mapping, texturing, dynamics, particles and so on.
  • Motion design is important. As designers, we know how to work with devices with physical boundaries. VR has none, so it’s a different way of thinking. “How does this element appear and disappear?” will be a recurring question.
  • Python, C#, C++ or any previous coding skills will help you ramp up faster. Prototyping has a big place because of the fundamental need to iterate. This area is so new that you might be one of the first to design a unique kind of interaction. Any recent game engine such as Unity or Unreal Engine relies heavily on code. There is a large, active community in game and VR development, with a huge amount of training and resources already available.
  • Be prepared to be scared and get ready to embrace the unknown. It’s a new world that evolves every day. Even the biggest industry-leading companies are still trying to figure things out. That’s how it is.

Roles

Design teams will evolve because this new medium opens a lot of possibilities for creation. Think about the video game or the film industry for instance.

I think there will be two big design buckets.

The first one will be about the core user experience, interface, and interaction design. This is very close to how product design teams are structured today (visual, UI, UX, and motion designers, researchers, and prototypers).

Each role will have to adapt to the rules of this new medium and keep a tight relationship with engineers. The goal will always remain the same: create a fast iteration cycle to explore a wide range of interactive designs.

On the other hand, content teams will replicate indie and game design studio structures to create everything from unique experiences to AAA games. The entertainment industry as we know it in other mediums will likely be very similar in VR.

Ultimately, both will have a close relationship to create premium end-to-end experiences. Both industries have a great opportunity to learn from each other.

To wrap up on my personal experience, I think being a product designer in VR is not that different but requires a lot of dedication to understand and learn a vast field of knowledge.

First step and fundamentals of VR design

First step

In this second part of the article, I will try to cover the basics you need to know about this medium. It’s meant to be designer-oriented and simplified as much as possible.

Let’s get (a little bit) technical

The new dimension and the immersiveness are game changers. There is a set of intrinsic rules you need to know in order to respect your users physiologically and treat them carefully. We grouped some of these principles in an app so you can learn them through a great immersive experience.

Download Cardboard Design Lab

You can watch Alex’s presentation at I/O this year, which goes more in depth. The following is a small summary.

If you have to remember just two rules:

  • Do not drop frames.
  • Maintain head tracking.

People instinctively react to external events you might not be aware of, and you should be designing accordingly.

Physiological comfort. This covers notions like motion sickness. Be careful when using acceleration and deceleration. Maintain a stable horizon line to avoid the “sea sickness” effect.

Environment comfort. People can experience various discomforts in certain situations, like heights, small spaces (claustrophobia), big spaces (agoraphobia) and so on. Be careful with scale and colliding objects. For example, if someone throws an object at you, you will instinctively try to grab it, dodge it, or protect yourself. Use this to your advantage, not to the user’s disadvantage.

You can also use the user’s senses to help you create more immersive products and cues. You can find inspiration in the game industry, which uses all sorts of tricks to guide players during their journey. Here are a couple:

  • Audio for spatial positioning.
  • Light to show a path and help the player.

Do not hurt or over-fatigue your users. It’s a classic mistake when you start to design for this medium. As cool as they look, Hollywood sci-fi movies fed us interactions that go against simple ergonomic rules and can create major discomfort over time. Minority Report gestures are not suitable for a long period of activity.

I made a very simplified illustration of XY head-movement safe zones. Green is good, yellow is OK, and red should be avoided. There are some user studies made public (links at the bottom of the page) that will give you more in-depth information about that topic.

A simplified illustration of XY head movement safe zones.

Bad design can lead to more serious conditions.

As an example, have you heard about Text Neck? A study published in Neuro and Spine Surgery measured the varying pressure in our neck as the head moves to different positions. Moving from a neutral position, looking straight ahead, to looking down increases the pressure by 440%. The muscles and ligaments get tired and sore, the nerves are stretched, and the discs get compressed. All of this can lead to serious long-term issues such as permanent nerve damage.

TL;DR: avoid extended look-down interactions.

Degrees of freedom

The body has six different ways of moving in space. It can rotate and translate in XYZ.

3 Degrees of freedom (Orientation tracking)

Phone-based head-mounted devices such as Cardboard and Gear VR track orientation via an embedded gyroscope (3DOF). Rotation on all three axes is tracked.

6 Degrees of freedom (Orientation + Position tracking)

To achieve six degrees of freedom, the sensor(s) will track positions in space (+X, -X, +Y, -Y, +Z, -Z). High-end devices like HTC Vive or Oculus Rift are 6DOF.

Tracking
Making 6DOF possible frequently involves optical tracking of infrared emitters by one or more sensors. In Oculus’s case, the tracking sensor is on a stationary camera, while in Vive’s case the tracking sensors are on the actual HMD.

Oculus and Vive lighthouses position tracking
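As a rough illustration of what each class of device reports, here is a small TypeScript sketch; the type names are my own, not from any particular SDK.

```typescript
// 3DOF devices (Cardboard, Gear VR) report orientation only,
// typically a quaternion derived from the phone's gyroscope.
interface Pose3DOF {
  orientation: { x: number; y: number; z: number; w: number };
}

// 6DOF devices (HTC Vive, Oculus Rift) add a tracked position in space
// (+X/-X, +Y/-Y, +Z/-Z), measured by external sensors or lighthouses.
interface Pose6DOF extends Pose3DOF {
  position: { x: number; y: number; z: number }; // in meters
}
```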

Inputs

Depending on the system you are designing for, the input method will vary and affect your decisions. For example, Google Cardboard has a single button, which is why the interaction model is a simple gaze and tap. The HTC Vive uses two six-degrees-of-freedom controllers, and Oculus will ship with an Xbox One controller but will eventually use a 6DOF dual controller, “Oculus Touch”. All of them allow you to use more advanced and immersive interaction patterns.
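To make the gaze-and-tap model concrete, here is a hypothetical TypeScript sketch of the math behind a gaze cursor: a target counts as “gazed at” when the angle between the head’s forward vector and the direction to the target is small, and the single Cardboard button then confirms the selection. None of this is tied to a specific SDK.

```typescript
type Vec3 = { x: number; y: number; z: number };

const dot = (a: Vec3, b: Vec3) => a.x * b.x + a.y * b.y + a.z * b.z;
const normalize = (v: Vec3): Vec3 => {
  const len = Math.sqrt(dot(v, v));
  return { x: v.x / len, y: v.y / len, z: v.z / len };
};

// True when the target lies within `thresholdDeg` of the head's forward direction.
function isGazedAt(
  headPosition: Vec3,
  headForward: Vec3,
  targetPosition: Vec3,
  thresholdDeg = 5
): boolean {
  const toTarget = normalize({
    x: targetPosition.x - headPosition.x,
    y: targetPosition.y - headPosition.y,
    z: targetPosition.z - headPosition.z,
  });
  const angleDeg = Math.acos(dot(normalize(headForward), toTarget)) * (180 / Math.PI);
  return angleDeg < thresholdDeg;
}

// Gaze and tap: on the Cardboard button press, select whatever isGazedAt() is true for.
```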

The good old Xbox One controller and Oculus Touch

There are also other kinds of inputs such as hand tracking. The most famous being Leap Motion. You can mount it to your Head Mounted Display (HMD).

Leap Motion on top of a DK2

This area constantly evolves as the technology catches up, but as of today, hand tracking is not reliable enough to be used as the main input. The principal issues relate to tracking hands and fingers, collisions, and subtle movements.

Even though it’s very familiar, using a game controller is a disappointing experience. It physically removes some of the freedom VR creates. In first-person shooters, strafing and moving can cause discomfort because of the accelerations.

On the other hand, the HTC Vive controllers reinforce the VR experience thanks to the six degrees of freedom, and Tilt Brush is a really good example. As I write these lines, I haven’t tried the Oculus Touch, but every demo I have seen looks very promising. Check out the Oculus Toybox demo.

While designing user interfaces and interactions, inputs are the keystone that will drive some decisions differently depending on which method you are using. You should be familiar with all of them and aware of their limitations.

Tools

This is a big piece and might require a more in-depth article. I will focus on the most popular tools used in this industry.

Pen and paper

You just can’t beat them. It’s the first tool we use because it’s always around and does not require many skills. It’s a proven way to express your ideas and iterate at a fast and cheap pace. These factors are important because, in VR, the cost of moving from wireframes to hi-fi is higher than in 2D.

Sketch

I still use it every day. Because of its ease of use, it’s the perfect tool that allows me to create a lot of explorations before moving to a VR prototype. It’s also handy for its export tools and plugins that are a huge time saver. If you are not familiar with that program, I wrote articles here and there.

Cinema 4D

I don’t see C4D as a competitor of Maya. Both are great tools, and each excels in its own way. When you don’t have a 3D background, the learning curve can be very steep. I like C4D because the interface, the parametric and non-destructive approach make sense for me. It helps me create more iterations quickly. I love the MoGraph modules, and a lot of great plugins are available. The community is very active, and you can find a lot of high-quality learning materials.

Cinema 4D motion explorations

Maya

Maya is colossal, in a good and a bad way. It does anything and everything a 3D artist needs. Most games and movies are designed with it. It’s a robust piece of software that can handle massive simulations and very heavy scenes with ease. From rendering to modeling, animation, and rigging, it’s simply the best tool out there. Maya is highly customizable, and that is one reason why it’s the industry standard. Studios need to create their own sets of tools, and Maya is the perfect candidate to integrate into any pipeline.

On the other hand, learning all the tools will require your full and unconditional dedication for quite some time. I mean weeks of explorations, months of learning and years of practice on a daily basis.

Unity

It’s most certainly THE prototyping tool where everything will happen. You can easily create and move things around with a direct VR preview of your project. It’s a powerful game engine with a great community and a ton of resources available in their store (the asset author determines the pricing). In the assets library, you can find simple 3D models, complete projects, audio, analytics tools, shaders, scripts, materials, textures and so on.

Their documentation and learning platform are stellar. They have a wide range of high-quality tutorials.

Unity3D mainly uses C# or JavaScript and comes with Microsoft Visual Studio, but it doesn’t come with a built-in visual scripting editor, even though you can find good ones in the asset store.

It supports all major HMDs and is the best for cross-platform builds: Windows PC, Mac OS X, Linux, Web Player, WebGL, VR (including HoloLens), SteamOS, iOS, Android, Windows Phone 8, Tizen, Android TV and Samsung Smart TV, as well as Xbox One & 360, PS4, PlayStation Vita, and Wii U.

It supports all major 3D formats and is best-in-class for 2D game creation. The in-app 3D editor is weak, but people have built great plugins to correct that. The software is license-based, but you can also use the free version to a certain extent. You can check the details on their pricing page. It’s the most popular game engine out there, with ~47% market share.

Unreal Engine

The direct competitor of Unity3D. Unreal also has great documentation and video tutorials. Their store is smaller because it’s much newer.

One of its big advantages over the competition is its graphics capabilities; Unreal is one step ahead in nearly every area: terrain, particles, post-processing effects, shadows & lighting, and shaders. Everything looks amazing.

Unreal Engine 4 uses C++ and comes with Blueprint, a visual script editor.

I haven’t worked with it too much yet, so I can’t elaborate more.

Less cross-platform compatibility: Windows PC, Mac OS X, iOS, Android, VR, Linux, SteamOS, HTML5, Xbox One, and PS4.

Closing notes

Virtual reality is a very young medium. As pioneers, we still have a lot to learn and discover. That’s why I am very excited about it and why I joined this team. We have the opportunity to explore and we should, as much as we can. Understand, identify, build and iterate. Over and over.
And over again…

Resources

Community

  • Immersive design Facebook group

Videos

  • Google I/O 2015 — Designing for Virtual Reality
  • Oculus Connect keynotes
  • VR Design: Transitioning from a 2D to 3D Design Paradigm
  • VR Interface Design Pre-Visualisation Methods
  • 2014 Oculus Connect — Introduction to Audio in VR

Tutorials

  • Cinema 4D tutorials
  • Unity 3D tutorials
  • Maya and 3D tools tutorials

Articles

  • LeapMotion — VR Best Practices Guidelines
  • The fundamentals of user experience in virtual reality
  • Ready for UX in 3D?

Thanks to everyone who helped me with the rereading and improvements 💖

From product design to virtual reality

LYNN MERCIER: The truth is, like, if we want to evolve the material design system, we need to be able to build on top of the code, and each layer of that code matters.

MUSTAFA KURTULDU: The conveyor belt is-- designer works on something, developer takes it, and developer screams because there was no conversation.

I think that's one of the biggest challenges.

[MUSIC PLAYING] One of the challenges in the beginning with material on the web was there's so many different implementations.

Again, the single source of truth, so you had Angular Material, you had Polymer, you had MDL.

How have you found solving that single source of truth? LYNN MERCIER: Yeah.

Originally we had a unique team of developers in both Android-- I'm sorry-- Angular, and Polymer, and all these other web frameworks sort of building their own implementations of material design.

But we found that we couldn't keep that going at scale.

Like, we were iterating on the material design system so quickly, and we couldn't keep a single source of truth with these other component libraries.

So we started developing a technology where we would write our JavaScript once and sort of abstract its interaction with the HTML, so you wouldn't directly reference a DOM element.

And then we would wrap that JavaScript in individual component libraries.
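This foundation/adapter split is roughly how MDC Web shares logic across frameworks. A minimal TypeScript sketch of the idea, with illustrative names rather than the actual MDC Web API: the framework-agnostic foundation owns the logic once and only talks to a small adapter, so it never references a DOM element directly, and each framework (vanilla DOM, React, Angular, Polymer) supplies its own adapter.

```typescript
// Framework-agnostic "foundation": the component logic lives here exactly once.
// It never touches the DOM; it only calls methods on an adapter.
interface CheckboxAdapter {
  addClass(className: string): void;
  removeClass(className: string): void;
  notifyChange(checked: boolean): void;
}

class CheckboxFoundation {
  constructor(private adapter: CheckboxAdapter) {}

  setChecked(checked: boolean): void {
    if (checked) {
      this.adapter.addClass('checkbox--checked');
    } else {
      this.adapter.removeClass('checkbox--checked');
    }
    this.adapter.notifyChange(checked);
  }
}

// A plain-DOM wrapper. A React, Angular, or Polymer wrapper would implement
// the same adapter against its own rendering model instead.
function createVanillaCheckbox(root: HTMLElement): CheckboxFoundation {
  return new CheckboxFoundation({
    addClass: (c) => root.classList.add(c),
    removeClass: (c) => root.classList.remove(c),
    notifyChange: (checked) =>
      root.dispatchEvent(new CustomEvent('change', { detail: { checked } })),
  });
}
```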

It's not a perfect solution.

We're still working on making it faster and better, but we've found that that creates these sort of components that look like they belong in the framework.

So any framework developer who's working there, they look seamlessly like they're a part of the environment.

MUSTAFA KURTULDU: The one thing that we struggled with with Material Design Lite was there was a lot of black magic going on in the DOM.

So you check Developer Tools, and there'll be, like, these random elements.

And that was, like, an opinionated decision so, you know, how do you go about developing a new framework where you have to have an opinion-- there has to be, like, this is the baseline of what we're doing-- without impeding on, like, what the developer just wants to do? They just want this component to work, or this widget, or whatever.

LYNN MERCIER: Yeah.

I try as much as possible to avoid black magic.

And, like, whenever I'm reviewing any code that any of the designers on my team are writing, we, like, try and avoid anything that's-- maybe it's a little hack, and it makes it slightly more performant-- but the truth is, like, if we want to evolve the material design system, we need to be able to build on top of the code, and each layer of that code matters.

So we try and, like, steer away from any black magic and just have this one source of truth that works with all the component libraries as much as possible.

MUSTAFA KURTULDU: In terms of, like, working with existing frameworks, what's the relationship there? Because, like, React is a thing-- you have to-- it's the real world, right? LYNN MERCIER: Mm-hmm.

MUSTAFA KURTULDU: Or like WordPress is a thing.

Like, you have to work in that world.

LYNN MERCIER: Mm-hmm.

MUSTAFA KURTULDU: So there may be certain things that you can or can't do as a result of that.

LYNN MERCIER: Yeah.

MUSTAFA KURTULDU: Like, for maintaining a framework where-- it's not Android.

It's not, like, a single-- LYNN MERCIER: Yeah.

MUSTAFA KURTULDU: --you have-- it's, like, the web is all about relationships between different code bits.

I mean, how do you manage that? LYNN MERCIER: It gets really tricky and really funny.

So we tend to prioritize them in terms of what developers are already using.

So React is a great example.

There are a ton of codebases already in React-- it's super-popular.

So we want to prioritize that one first, which is why we're making an MDC React library for React in particular.

But then there's other libraries, like Angular and like Polymer, that we want to start using as well.

But we tend to prioritize them, again, based off whether or not developers are already using them.

In terms of, like, keeping all that functioning-- and sometimes you end up, like, one framework wants it to do it one way and another framework wants it to do another way.

It's just constantly compromising.

Like, we work with these developers on the Polymer team, or we, like, talk to the React community and try and figure out what's the right way to figure it out.

And we just sort of settle on the right compromise and stay there.

We do it as well with browsers.

So for example, we tend to develop first on Chrome because it's kind of the best, and it works nice, but we have to support Safari, and Firefox, and Edge as well.

So we tend to test IE at the very end.

And we want it to work, but there's sort of, like, graceful degradation sort of things that happen.

As long as that happens, like, carefully and gracefully, then it tends to be OK.

And I think we do the same sort of thing with platforms.

You know, maybe it doesn't perfectly work in every platform, but as long as we can kind of gracefully degrade that component in that situation, it'll work out.

MUSTAFA KURTULDU: Yeah, I know the BBC have, like, a term that's called cutting the mustard.

So basically, they will have, like, a baseline where things have to work.

LYNN MERCIER: Yeah.

MUSTAFA KURTULDU: Like, with this, and if it doesn't work, or it doesn't support this technology, they're gonna say-- you know what, you're not going to get these experiences that we're designing.
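For context, the widely cited form of the BBC's "cuts the mustard" check is a tiny feature-detection baseline: browsers that pass it get the enhanced experience, everything else gets the core page. A sketch in TypeScript, trimmed to the classic three tests (the class name here is my own):

```typescript
// Classic "cuts the mustard" baseline: if these modern DOM APIs exist,
// load the enhanced experience; otherwise serve the core page only.
function cutsTheMustard(): boolean {
  return (
    'querySelector' in document &&
    'localStorage' in window &&
    'addEventListener' in window
  );
}

if (cutsTheMustard()) {
  // Hypothetical hook: flag the document so enhanced styles and scripts apply.
  document.documentElement.classList.add('enhanced');
}
```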

I mean it-- how would you feel about that as a concept? LYNN MERCIER: Yeah, we've had to use that already.

So there's new things coming out-- Material Design around shape.

And on the web platform, no matter what technology you're in-- like, what web platform or what browser you're in-- rounded corners are really easy.

Like, cut-off corners? Impossible.

Just straight up impossible with the existing technology.

And so we kind of had to go back to our material design team and say, like-- look, we can update the CSS spec today in 2018, and then three years from now, our children's children will, like, have this feature, but we're not going to be able to implement it right now.

So there are some features where you just kind of have to draw the line and say, we can't do this feature without it being a confusing story, without it being some sort of hack that no one would be able to use.

MUSTAFA KURTULDU: So how about SVG? I mean, I suppose when it comes to animation, the challenge of SVG is the performance 'cause-- LYNN MERCIER: SVGs, and then the shadows on top of them, and the scroll performance underneath those SVGs-- by the time you, like, transport all the browsers and all the situations where that component would be, it gets really confusing quickly.

MUSTAFA KURTULDU: And very complicated.

LYNN MERCIER: Yeah, very complicated.

MUSTAFA KURTULDU: That's quite interesting, because the conveyor belt is designer works on something, developer takes it, and developer screams because there was no conversation.

I think that's one of the biggest challenges developers face because, if you just talk to me, then I'll be able to explain, especially for designers who have no coding experience.

And I know we've spoken before, and you've mentioned stress testing the design, which is a new concept for me.

LYNN MERCIER: It's my concept.

MUSTAFA KURTULDU: How does that work, where you're stress testing the design? LYNN MERCIER: I mean, I think there's a limitation in designer tools that makes them want to force everything to sort of this pixel-perfect mock.

And it's gorgeous-- it creates some gorgeous assets, but it doesn't always work in a real-world application.

And a developer's job is to create something that works in a real-world application, right? Ours is the stuff-- the code that's running live.

And so many problems come from a design being pixel-perfect for one language, one screen width, one set of content.

And when you go to build that, you can build sort of a dummy site quickly, but once you start populating it with real content, all these problems come up.

And I think most designers, if you go and talk to them and say, like-- hey, I have this problem.

They'll help you.

They'll, like, show you how to change the design and tweak it in this situation.

Like, they're very receptive to that feedback-- they want to make their designs better.

But if you don't know who your designer is when you have this problem, then you just have this bug that says-- doesn't work in German.

Like, what do you do? You have no idea how to fix it.

So yeah, I think this conveyor belt problem of designers who sort of, like, design something but then leave the project and don't collaborate with the developers as they're building it, it makes it really difficult for the developers to make the product better over time.

MUSTAFA KURTULDU: So how do you think designers can actually improve their process to make that relationship better? Or more, is it really down to the most obvious? You just need to pair program, or pair together.

You have to talk to the person.

That's really the best way to do it, so like-- LYNN MERCIER: That is the best way.

I mean, I think that can be really difficult in certain-- if you don't have enough time and resources, sort of dedicate, like, one person, one designer, and one developer to every single feature.

I think there's ways in the middle to do it.

So for one, make sure that you know each other's names.

Like, if you're remote, make sure you know how to deploy your code somewhere to staging so your designer can work with it, and make sure your designer has a way to send you, like, iterations on mocks.

Another sort of quick and obvious thing, I think, for designers is to internationalize.

The moment you take all the text from your mock, put it in Google Translate, put it back in the mock, and see what looks horrible-- MUSTAFA KURTULDU: German.

LYNN MERCIER: --like, yeah! German! Or even, like, CJK languages.

Just pick a language.

It doesn't matter if you translate it right.

Just, like, do that first step because you're going to run into all the width and height problems that a developer will run into live.
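A lightweight way to run the same stress test in code is pseudo-localization: mechanically stretch every string so the width and height problems surface before any real translation exists. The sketch below is my own illustration of the idea (the 1.4 expansion factor is a rough rule of thumb, since German and many other languages often run noticeably longer than English):

```typescript
// Pseudo-localize a UI string: pad it to simulate a longer translation
// and wrap it in brackets so untranslated strings are easy to spot.
function pseudoLocalize(text: string, expansion = 1.4): string {
  const extra = Math.ceil(text.length * (expansion - 1));
  return `[${text}${'~'.repeat(extra)}]`;
}

// Example: run every label in a mock or staging build through it.
console.log(pseudoLocalize('Add to cart')); // "[Add to cart~~~~~]"
```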

And I think it's good for designers, right? It helps you make your product better to get feedback about what sort of languages do I need to support.

MUSTAFA KURTULDU: And it's especially important in user-generated apps where the content could be 10 pages, or it could be two lines.

LYNN MERCIER: Yeah! MUSTAFA KURTULDU: It's not like-- you always get the mock where there's, like, the name-- LYNN MERCIER: Yeah! MUSTAFA KURTULDU: --the avatar name's perfect.

LYNN MERCIER: Fits perfectly.

MUSTAFA KURTULDU: Yeah, but what if the name's like, you know, four words long? LYNN MERCIER: Yeah.

MUSTAFA KURTULDU: Is there anything else that they can do like the stress test that wasn't just really-- LYNN MERCIER: Internationalization is a big one.

I think different screen widths, at least in your own web, is helpful as well, like making sure that the obvious breakpoints work but also sort of smaller ones or bigger ones.

But, yeah, it just comes back to, like, be there when your developer runs into a real problem and help them fix that problem.

I think most developers want to fix problems.

They want to code that out.

They just want to get on their headphones-- like, get the code out that will fix the problem, but they don't know how to redesign the site, right? We're not going to-- if you make a developer guess how to design a site, we're going to guess really poorly.

So you need to help us as designers.

SPEAKER: If you spent loads of time polishing your, like, amazing prototype, then you suddenly become very, like, you know, reticent to throw it away.

Kind of like it's your baby, you're going to polish this too much.

And so that's dangerous, because then you're not using prototyping for prototyping's real purpose, which is to learn.


Sketch was made for screen-based design.
Websites, app interfaces, icons… these objects of design exist within a world of pixel measurements, RGB colors, and presentation on digital screens. Unlike many of the Adobe creative tools which include 10,000 features and the kitchen sink, Sketch is laser-focused in its purpose—and consequently works far better (and more efficiently) for what it does do.

Sketch was not made for print-based design.
Business cards, brochures, posters… these exist within a physical world of inch/centimeter/point/pica measurements, CMYK or Pantone colors, and presentation on a variety of papers and materials. Adobe Illustrator and InDesign are two of the most popular tools in this arena.

If you’re like me, you’re far more efficient working in Sketch.

And when a print design project rolls around, you might find yourself yearning to continue using the same tool you’ve become so adept at using for web/UI design. I want you to know that it’s possible. Here’s how I do it:

(full disclosure: Adobe Illustrator is required)

The Magic Number 72

Dating back to the craft of setting lead type for a printing press, the primary units of measurement were points (72 per inch) and picas (12 per inch). Lead type (pictured here) is measured in points, and is produced in pica or half-pica increments such as 12, 18, 24, 36, and 72 points. Those numbers should sound familiar to you, as they became standard digital font sizes with the Macintosh. The first Macs used screens where every inch contained 72 pixels, resulting in 12pt text that looked practically the same size onscreen as in print. The evolution of pixels per inch (PPI) is too extensive for this article (especially since the advent of retina displays), although it’s important to know a bit about the origins of this 72:1 ratio.

This article will mostly use inch measurements, as used for print design in the US. If you are familiar with a centimeter workflow, I’d love to hear from you!

Sketch measures everything in pixel units, so we need a way to convert our design to the physical world of inches. By now you may have guessed where this is going: 72 pixels in Sketch converts to 1 inch in an exported PDF.

  • An 8.5" × 11" piece of paper (US Letter) converts to a 612px × 792px artboard.
  • A typical 3.5" × 2" business card converts to a 252px × 144px artboard.
  • When adding a new artboard, Sketch 3 gives you a few “Paper Sizes” presets. Speed things up by adding your own custom artboard presets!
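A small sketch of that conversion, as a helper of my own rather than anything built into Sketch:

```typescript
// Sketch works in pixels; an exported PDF is interpreted at 72 pixels per inch.
const PDF_PPI = 72;

const inchesToPx = (inches: number): number => inches * PDF_PPI;
const pxToInches = (px: number): number => px / PDF_PPI;

// US Letter: 8.5" x 11" -> 612px x 792px artboard.
console.log(inchesToPx(8.5), inchesToPx(11)); // 612 792

// Business card: 3.5" x 2" -> 252px x 144px artboard.
console.log(inchesToPx(3.5), inchesToPx(2)); // 252 144
```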

The pixel dimensions of a 72 PPI layout may be far smaller than you are used to when working on websites or user interfaces. Remember that the clarity of your print project is dictated by the print method you use—Sketch’s “Show Pixels” function is of no use here!

Tips for Designing Your Layout

  • For elements in your design, try to use measurements that make sense in inches. 1px = 1pt for lines and font-sizes. I’ll often use 1/8 inch (9px) or 1/16 inch (4.5px) increments for layout elements.
  • You can use Sketch’s Grid feature to make these inch-appropriate positions or measurements easier. I suggest a grid with a 9px (1/8 inch) block size and thick lines every 8 blocks (1 inch). Show/hide the grid with ⌃G on your keyboard.
  • You can turn off “Pixel Fitting” in Preferences. There’s no need to be a stickler for pixel alignment as you would be for screen-based design.

Margins & Bleeds

Professional print shops often require your artwork to have extra space on all sides, extending any parts of your design that “bleed” out to the edge (see example below). This compensates for the slight, yet inevitable, variance in where the edges are cut on your final print. My printer asks for a 1/8 inch bleed, and I often add this to my Sketch layout (9px extra on all sides). If your design has elements that bleed, I suggest you do the same—if not, you can easily add these extra margins later when saving a PDF from Illustrator. Printers will also recommend that any text is at least 1/8 inch inside the trim lines (a “safe zone” or “critical print area”), as in the business card below.

The “Trim Lines” indicate what the final card will look like. Because trimming is rarely 100% accurate, any parts of the design that extend to the very edge should continue out to a “Bleed”. Shown here, the bleed extends to 1/8 inch outside the artwork.

Preparing the File for Print

99% of print shops are strict about the specifications of your “artwork” files. The following process will help you give printers the files they want! If your layout relies heavily on images, gradients, or shadows, skip to the next section!

When you have finished your design in Sketch, export it as a PDF at 1x scale. Many programs, such as Preview or Adobe Illustrator, will automatically interpret the file at 72 PPI. You can view the PDF’s dimensions in inches in Preview (Tools > Show Inspector, ⌘I), or in pixels using Finder’s Get Info window (under “More Info”). If you save your PDF through Illustrator, pixel and inch dimensions will be automatically included in the file.

There are three other things we need to change about Sketch’s exported PDF:

  1. Text needs to be “Converted to Outlines”.
  2. The colors need to be CMYK values instead of RGB.
  3. Any images in the design need to be embedded as CMYK images.

Converting Text to Outlines

To ensure that your design is printed exactly how you see it on your computer, it is important to convert the text objects in the PDF to actual vector shapes, or “outlines”. This makes the text look exactly the same on any program on any computer, regardless of the fonts you’ve used in the design, and regardless of whether or not those fonts are installed on the printer’s computer.

You can convert text to outlines in Sketch (more about that here), although if your design has more than a few lines of text, Sketch will slow down dramatically. If you want a guaranteed way to crash Sketch, try selecting a dozen text objects and converting them to outlines all at once! Fortunately, Adobe Illustrator excels in this department, so we’ll use that instead.

  • Open the PDF in Illustrator and navigate to Select > All (⌘A) from the menu bar.
  • Also in the menu bar, navigate to Type > Create Outlines (⌘⇧O). Easy as that!

Converting to CMYK Colors

After opening your PDF in Illustrator, navigate to File > Document Color Mode > CMYK Color. This converts the entire document to a CMYK colorspace from RGB. That’s the easy step. Now we have to change the colors in our design to actual CMYK values.

If you’re used to screen-based design and appreciate great colors, I feel obligated to tell you that CMYK may disappoint you. Due to the nature of combining those 4 colors (cyan, magenta, yellow, and black) in ink, many bright and saturated colors are difficult or impossible to recreate. Without diving into color theory or the pros/cons of various print methods, I will simply suggest that for any color that is important to your design you see a sample of that exact color value from a similar printer on a similar material. To do this I recommend choosing a close match on a Pantone swatchbook (a bit pricey, but a great investment), or ask your printer for a printed sample of a variety of colors printed on the paper you’ll use (they probably already have these, and can give you each color’s CMYK value).

Once you’ve chosen great CMYK values for all your colors, it’s time to replace the color value for each of the elements in your design. This sounds tedious—and to a certain extent it is—but I’ve discovered a few shortcuts to help you!

  • First off, you will need to select the elements whose colors you want to change. If you aren’t familiar with Illustrator, know that a layer is only selected when you click the small circle to the right of it. Simply clicking on the layer’s name will not do anything!
  • If your design has many elements with the same color (say, all green text), they can be selected all at once by first selecting one instance of the element then clicking the “Select Similar Objects” button on the right of the toolbar. If this toolbar or button isn’t available, try navigating to Select > Same in the menu bar.
  • When your elements are selected, hold down the Shift key when you click on the fill color in the toolbar (fill color to the left, stroke/border color to the right). Even elements that are pure black need to be converted to CMYK black, for which there is a little swatch below the color sliders.

Last Step!

When all of your text has been converted to outlines and all of your colors are CMYK, it’s time to save a separate PDF (I add “-print” as a suffix to the new filename). By using File > Save As, you get a trillion options for the PDF. The single option I ever use is to add a bleed margin (my printer likes 1/8 inch) on all sides of the artwork. To do this, go to the “Marks and Bleeds” section on the left and uncheck “Use Document Bleed Settings”, as shown below.

You’re all done! Trust me, next time this process will take you half as long!

Is Your Design Image-Heavy?

If your Sketch design includes bitmap images (non-vector images), they will be automatically converted from RGB to CMYK when you change the Document Color Mode. Upon importing the PDF to Illustrator, any shadows in your design will be converted to bitmap images and any gradients will become un-editable “Non-Native Art”. Because of this, if images, shadows, or gradients are important to your design, I strongly suggest you instead save the entire Sketch layout as a PNG and convert it to a CMYK file in Photoshop using the following steps.

  1. Export the Sketch artboard as a PNG at 4.166x scale, which gives you the number of pixels you’ll need for a 300 PPI print-ready file (see the sketch after this list). Printers rarely accept bitmap images below this resolution. Make sure your artboard includes the necessary bleed margins (described above) before export.
  2. Open the PNG in Photoshop and navigate to Image > Image Size, in the menu bar. Uncheck the “Resample” checkbox and type in either the artwork’s dimensions in inches or the “Pixels/Inch” you used when exporting from Sketch (again, this is often 300 PPI). Click “OK”.
  3. In the menu bar, navigate to Image > Mode > CMYK Color. This will alert you that Photoshop is converting the file to a default CMYK color profile. This step may visibly change the colors of your design. Rest assured that your computer screen is not an accurate representation of colors in print, although you should also not expect the same bright or saturated colors capable with RGB (as described above).
  4. Adjust the colors slightly if you desire, then Save As a .psd or .tif file. Be sure to tell the printer what bleed margins you included in the artwork!
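The 4.166x figure is simply the ratio of the target print resolution to the 72 PPI artboard; here is a quick sketch of the arithmetic (my own helper, assuming a 300 PPI target):

```typescript
// Export scale needed to turn a 72 PPI Sketch artboard into a print-ready bitmap.
const exportScale = (targetPPI: number, artboardPPI = 72): number => targetPPI / artboardPPI;

console.log(exportScale(300)); // 4.1666... -> the "4.166x" export scale

// Example: a 3.5" x 2" card with a 1/8" bleed on all sides is a 270px x 162px
// artboard at 72 PPI, which exports at 300 PPI to 1125 x 675 pixels.
console.log(270 * exportScale(300), 162 * exportScale(300)); // 1125 675
```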

Of course you can use this process in conjunction with the PDF + Illustrator workflow above, by embedding the Photoshopped images into your Illustrator document. But most of the time I stick to one process or the other.

Is This Workflow Right for You?

If you’re fast at designing in Sketch, feel more at ease or more creative using it, or aren’t very familiar with Illustrator/InDesign, this may be good for you. This may also be a useful workflow if you have existing designs from Sketch (an interface, icon, logo) that you want to prepare for professional printing. I can’t read the future, but with Bohemian Coding’s small team and success focusing on screen-based design, I don’t advise you to hold your breath for print features. It’s a huge can of worms!

Examples of projects made with this workflow. From packaging, to letterpressed business cards, to laser-engraved signage. This work for Juice Shop recently won the Type Directors Club’s prestigious annual design competition.

I’ve written this article to share my workflow for print design projects, but also to learn of ways that I might improve this workflow in the future. If you have any suggestions, especially related to Illustrator or the print process, feel free to share them!
