Hello and welcome to this Google Web Designer video tutorial.

I’m Owen Corso from Google.

And today, we’re going to build a rich media expandable creative with video.

Let’s start by selecting File, then New File.

This opens a dialog box where we will set up our ad.

First, let’s pick our type of project.

We have four options. The default is Display & Video 360, so we will leave that as is.

Backstory

I am a product designer at Google, and I joined the company through Sparrow, a French startup that was acquired on July 20, 2012. After that, I worked with the Gmail team to build, from scratch, a flagship product that became Inbox by Gmail. It shipped on October 22, 2014.

I designed productivity applications for a few years, and I felt like I had reached a tipping point. I wanted to expand my skill set, learn new things every day, and get better at something I had never touched. I needed new challenges to reboot myself by leaving my comfort zone.

I got interested in virtual reality around the time of the Oculus Kickstarter because of the immersiveness and the endless possibilities that came with it. There is nothing more exciting than building for a new medium and exploring uncharted territory.

I joined the Google Cardboard and Virtual Reality team on April 17, 2015. Thanks to Clay Bavor and Jon Wiley for this great opportunity.

Another dimension

My first weeks on the team were as scary as it gets. People used words I had never heard of and asked me questions I didn’t know how to answer.

I am not going to lie, ramping up on the jargon was not easy, but I was expecting that. Virtual reality is a deep field (pun intended), grouping together a variety of job titles, each requiring a very specialized skill set. The first weeks were intense, and day after day I got a better view of the big picture. Slowly, the pieces came together. I found out which roles would be the best fit, what I wanted to do, and what was required to get there. Regardless of the mission, I knew I would have to learn a lot, but I was prepared for this challenge. My feelings varied from one day to the next: from super excited to create and learn something new, to super scared because of the colossal amount of knowledge I still had to acquire. Working with smart and knowledgeable people around me reinforced these mixed feelings.

Everything is going to be alright

I told myself and firmly believed that the dots would connect eventually. I am a passionate person, and I knew that I didn’t mind spending hours learning and experimenting.

During my career as a product designer, I got better at understanding, identifying, and resolving user problems. Making things easy to use and delighting users is not that different, no matter the medium.

The core of the mission is the same, but to get you from point A to B there are some interesting things to know.

  • Sketching is, again, at the core of everything. During any brain dump or design phase, sketching is as fast as it gets. I’ve sketched more since I joined this team than I have in my entire career.
  • Any design skills, as diverse as they may be, will be a huge benefit.
  • Photography knowledge will help you, because you will interact with concepts such as field of view, depth of field, caustics, exposure, and so on. Being able to use light to your advantage has already been very valuable to me.
  • The more you know about 3D and its tools, the less you will have to learn. It’s pretty obvious, but be aware that at some point you might do architecture, character, or prop modeling, rigging, UV mapping, texturing, dynamics, particles, and so on.
  • Motion design is important. As designers, we know how to work with devices with physical boundaries. VR has none, so it’s a different way of thinking. “How does this element appear and disappear?” will be a recurring question.
  • Python, C#, C++, or any previous coding skills will help you ramp up faster. Prototyping plays a big role because of the fundamental need to iterate. This area is so new that you might be one of the first to design a unique kind of interaction. Any recent game engine, such as Unity or Unreal Engine, relies heavily on code. There is a large, active community in game and VR development, with a huge amount of training and resources already available.
  • Be prepared to be scared and get ready to embrace the unknown. It’s a new world that evolves every day. Even the biggest industry-leading companies are still trying to figure things out. That’s how it is.

Roles

Design teams will evolve because this new medium opens a lot of possibilities for creation. Think about the video game or the film industry for instance.

I think there will be two big design buckets.

The first one will be about the core user experience, interface, and interaction design. This is very close to how product design teams are structured today (visual, UI, UX, and motion designers, researchers, and prototypers).

Each role will have to adapt to the rules of this new medium and keep a tight relationship with engineers. The goal will always remain the same: create a fast iteration cycle to explore a wide range of interactive designs.

On the other hand, content teams will replicate indie and game design studio structure to create everything from unique experiences to AAA games. The entertainment industry as we know it in other mediums will likely be very similar in VR.

Ultimately, both will have a close relationship to create a premium end-to-end experience. Both industries have a great opportunity to learn from each other.

To wrap up on my personal experience, I think being a product designer in VR is not that different but requires a lot of dedication to understand and learn a vast field of knowledge.

First step and fundamentals of VR design

First step

In this second part of the article, I will try to cover the basics you need to know regarding this medium. It’s meant to be designer oriented and simplified as much as possible.

Let’s get (a little bit) technical

The new dimension and immersiveness are game changers. There is a set of intrinsic rules you need to know in order to respect your users’ physiology and treat them carefully. We gathered some of these principles in an app so you can learn them through a great immersive experience.

Download Cardboard Design Lab

You can watch Alex’s presentation at I/O this year, which goes more in depth. The following is a short summary.

If you have to remember just two rules:

  • Do not drop frames.
  • Maintain head tracking.

People instinctively react to external events you might not be aware of, and you should be designing accordingly.

Physiological comfort. This covers notions like motion sickness. Be careful when using acceleration and deceleration, and maintain a stable horizon line to avoid a “seasickness” effect.

Environment comfort. People can experience various discomforts in certain situations, like heights, small spaces (claustrophobia), big spaces (agoraphobia), and so on. Be careful with scale and colliding objects. For example, if someone throws an object at you, you will instinctively try to grab it, dodge it, or protect yourself. Use this to your advantage, not to the user’s disadvantage.

You can also use the user’s senses to help you create more immersive products and cues. You can find inspiration in the game industry, which uses all sorts of tricks to guide players during their journey. Here are a couple:

  • Audio for spatial positioning.
  • Light to show a path and help the player.

Do not hurt or over-fatigue your users. It’s a classic mistake when you start to design for this medium. As cool as they look, Hollywood sci-fi movies have fed us interactions that go against simple ergonomic rules and can create major discomfort over time. Minority Report-style gestures are not suitable for long periods of activity.

I made a very simplified illustration of XY head movement safe zones. Green is good, yellow is OK, and red should be avoided. There are some user studies made public (links at the bottom of the page) that will give you more in-depth information on that topic.

A simplified illustration of XY head movement safe zones.

Bad design can lead to more serious conditions.

As an example, have you heard about Text Neck? A study published in Neuro and Spine Surgery measured the varying pressure in our neck as the head moves to different positions. Moving from a neutral position, looking straight ahead, to looking down increases the pressure by 440%. The muscles and ligaments get tired and sore, the nerves are stretched, and the discs get compressed. All of this strain can lead to serious long-term issues such as permanent nerve damage.

TL;DR: Avoid extended look-down interactions.

Degrees of freedom

The body has six different ways of moving in space: it can rotate around and translate along the X, Y, and Z axes.

3 Degrees of freedom (Orientation tracking)

Phone-based head-mounted devices such as Cardboard and Gear VR track orientation via an embedded gyroscope (3DOF). Rotations around all three axes are tracked.

6 Degrees of freedom (Orientation + Position tracking)

To achieve six degrees of freedom, the sensor(s) also track position in space (+X, -X, +Y, -Y, +Z, -Z). High-end devices like the HTC Vive or Oculus Rift are 6DOF.
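To make the distinction concrete, here is a minimal sketch in plain JavaScript of the pose data each class of device reports; the object shapes and values are purely illustrative, not any particular SDK’s API.

// 3DOF (e.g. Cardboard, Gear VR): orientation only, as rotations around X, Y, Z.
const threeDofPose = {
  rotation: { x: 0.0, y: 1.57, z: 0.0 } // radians: pitch, yaw, roll
};

// 6DOF (e.g. HTC Vive, Oculus Rift): orientation plus position in space.
const sixDofPose = {
  rotation: { x: 0.0, y: 1.57, z: 0.0 }, // where the head is pointing
  position: { x: 0.2, y: 1.6, z: -0.5 }  // where the head is, in meters
};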

Tracking
Making 6DOF possible frequently involves optical tracking of infrared emitters by one or more sensors. In Oculus’s case, the tracking sensor is on a stationary camera, while in Vive’s case the tracking sensors are on the actual HMD.

Oculus (camera) and Vive (Lighthouse) position tracking

Inputs

Depending on the system you are designing for, the input method will vary and affect your decisions. For example, Google Cardboard has a single button, which is why its interaction model is a simple gaze and tap. The HTC Vive uses two six-degrees-of-freedom controllers, and the Oculus Rift will ship with an Xbox One controller but will eventually get a 6DOF dual-controller system, Oculus Touch. All of these allow you to use more advanced and immersive interaction patterns.

The good old Xbox One controller and Oculus Touch

There are also other kinds of inputs such as hand tracking. The most famous being Leap Motion. You can mount it to your Head Mounted Display (HMD).

Leap Motion on top of a DK2

This area evolves constantly as technology catches up, but as of today hand tracking is not reliable enough to be used as the main input. The principal issues are related to hand and finger collisions and the tracking of subtle movements.

Even though it’s very familiar, using a game controller is a disappointing experience. It physically removes some of the freedom VR creates. In FPS games, strafing and moving can cause some discomfort because of the accelerations.

On the other hand, the HTC Vive controllers reinforce the VR experience thanks to their six degrees of freedom, and Tilt Brush is a really good example. As I am writing these lines, I haven't tried Oculus Touch, but every demo I have seen looks very promising. Check out the Oculus Toybox demo.

While designing user interfaces and interactions, inputs are the keystone that will drive some decisions differently depending on which method you are using. You should be familiar with all of them and aware of their limitations.

Tools

This is a big piece and might require a more in-depth article. I will focus on the most popular tools used in this industry.

Pen and paper

You just can’t beat them. It’s the first tool we use because it’s always around and does not require many skills. It’s a proven way to express your ideas and iterate quickly and cheaply. These factors are important because, in VR, the cost of moving from wireframes to hi-fi is higher than in 2D.

Sketch

I still use it every day. Because of its ease of use, it’s the perfect tool that allows me to create a lot of explorations before moving to a VR prototype. It’s also handy for its export tools and plugins that are a huge time saver. If you are not familiar with that program, I wrote articles here and there.

Cinema 4D

I don’t see C4D as a competitor of Maya. Both are great tools, and each excels in its own way. When you don’t have a 3D background, the learning curve can be very steep. I like C4D because its interface and its parametric, non-destructive approach make sense to me. It helps me create more iterations quickly. I love the MoGraph modules, and a lot of great plugins are available. The community is very active, and you can find a lot of high-quality learning materials.

Cinema 4D motion explorations

Maya

Maya is colossal, in a good and a bad way. It does anything and everything a 3D artist needs. Most games and movies are made with it. It’s a robust piece of software that can handle massive simulations and very heavy scenes with ease. From rendering and modeling to animation and rigging, it’s simply the best tool out there. Maya is highly customizable, and that is one reason why it’s the industry standard. Studios need to create their own sets of tools, and Maya is the perfect candidate to integrate into any pipeline.

On the other hand, learning all the tools will require your full and unconditional dedication for quite some time. I mean weeks of explorations, months of learning and years of practice on a daily basis.

Unity

It’s most certainly THE prototyping tool where everything will happen. You can easily create and move things around with a direct VR preview of your project. It’s a powerful game engine with a great community and a ton of resources available in their store (the asset author determines the pricing). In the assets library, you can find simple 3D models, complete projects, audio, analytics tools, shaders, scripts, materials, textures and so on.

Their documentation and learning platform are stellar. They have a wide range of high-quality tutorials.

Unity3D mainly uses C# or JavaScript and comes with Microsoft Visual Studio, but it doesn't come with a built-in visual script editor, though you can find good ones in the asset store.

It supports all major HMDs and is the best for cross-platform builds: Windows PC, Mac OS X, Linux, Web Player, WebGL, VR (including HoloLens), SteamOS, iOS, Android, Windows Phone 8, Tizen, Android TV and Samsung Smart TV, as well as Xbox One & 360, PS4, PlayStation Vita, and Wii U.

It supports all major 3D formats and offers best-in-class 2D game creation. The in-app 3D editor is weak, but people have built great plugins to correct that. The software is license-based, but you can also use the free version to a certain extent; you can check the details on their pricing page. It’s the most popular game engine out there, with roughly 47% of market share.

Unreal Engine

The direct competitor of Unity3D. Unreal also has great documentation and video tutorials. Its store is smaller because it’s much newer.

One of its big advantages over the competition is its graphics capabilities; Unreal is one step ahead in nearly every area: terrain, particles, post-processing effects, shadows and lighting, and shaders. Everything looks amazing.

Unreal Engine 4 uses C++ and comes with Blueprint, a visual script editor.

I haven’t worked with it too much yet, so I can’t elaborate more.

It has less cross-platform compatibility: Windows PC, Mac OS X, iOS, Android, VR, Linux, SteamOS, HTML5, Xbox One, and PS4.

Closing notes

Virtual reality is a very young medium. As pioneers, we still have a lot to learn and discover. That’s why I am very excited about it and why I joined this team. We have the opportunity to explore and we should, as much as we can. Understand, identify, build and iterate. Over and over.
And over again…

Resources

Community

  • Immersive design Facebook group

Videos

  • Google I/O 2015 — Designing for Virtual Reality
  • Oculus Connect keynotes
  • VR Design: Transitioning from a 2D to 3D Design Paradigm
  • VR Interface Design Pre-Visualisation Methods
  • 2014 Oculus Connect — Introduction to Audio in VR

Tutorials

  • Cinema 4D tutorials
  • Unity 3D tutorials
  • Maya and 3D tools tutorials

Articles

  • LeapMotion — VR Best Practices Guidelines
  • The fundamentals of user experience in virtual reality
  • Ready for UX in 3D?

Thanks to everyone who helped me with the rereading and improvements 💖

Next, we can select the type of ad.

We want to make an expandable, so we select Expandable on the left.

Next, we can set the ad’s dimensions.

We are building a 320 by 50 that expands to 480 by 250.

So I will make those changes.

We then assign the creative a name.

I will leave my Save To location as the default, and leave the animation mode set to Quick.

Once I’m happy with my settings, I click OK.

Google Web Designer creates the initial pages of the ad for me with the dimensions I defined.

 

The collapsed page already contains a Tap Area event to expand the ad, and an expanded page with a close tap area to collapse back down.

Ant (GitHub) is much more than a React UI kit with a minimalist design aesthetic and every component under the sun. It is a rabbit hole that leads to a giant maze of interconnected libraries, with a serious ecosystem surrounding it. There’s a custom build tool based on Webpack called ant-tool, several CLI apps, community scaffolds, and a complete framework (dva, which has its own CLI as well). And the UI components are mini-projects in and of themselves — see this repo for information on each component.

Many of these libraries appear to be very polished, including an entire React animation library. And I’d love to learn more about them, but Ant comes with a challenge — the majority of the documentation is in Chinese.

How’s Your Chinese?

Let me preface this by pointing out that the components library and its terrific style guide have been translated into English by generous volunteers, so the UI kit is completely usable. And the translation effort demonstrates the project’s intentions to open up Ant to a wider audience, boding well for companies considering adopting it.

However, there are some language issues that remain. The English is sometimes confusing or obscure. The maintainer of the library has commented here that they welcome PRs for improving the documentation, so that could be a great way to get involved in this amazing project.

Good luck hunting down issues!

Another issue is that issues in Ant.Design are mostly filed and debated on GitHub in Chinese. This could be a deal breaker for enterprise applications, but I’m not sure it should be one for early startups since Ant can be used quite minimally, without making use of smarter features like built-in form validation. Still, if you find an issue or bug with the library, it will be difficult to research previous solutions to your issue, and that’s why I recommend making minimal use of the surrounding ecosystem at this stage.

Battle Tested

Popular UI libraries for React include Material-UI, Semantic-UI, Foundation, and Bootstrap (this and this), and they are all fairly mature. Material-UI should be singled out as it massively eclipses the others in popularity, with over 22k stargazers — and over 600 open issues. But it turns out that Ant.Design is a surprisingly worthy candidate as well. It’s battle tested by some of the most well-trodden sites on the web (Alibaba, Baidu), and it boasts a brilliant style guide, custom tooling, and, of course, a comprehensive catalogue of components. It also has only 85 open issues at the time of writing, which is a good thing considering its popularity.

So let’s take a tour of the library, see what it has to offer, and how to get started using it.

Ant Components

The Ant components list is dizzying. Sure, it contains the basics — modals, forms (inline and vertical), navigation menus, a grid system. But it also contains a ton of extras, such as a @mentioning system, a timeline, badges, a seriously nice table system, and other small fancy features, such as an involved address box (see the Habitual Residence field). Have a look — it has everything that a modern web application should, with a tasteful, minimalist aesthetic.

Design Principles

There’s a nice, concise section in the documentation on the guiding principles of Ant.Design. I found it a great read as it got me thinking a lot about UI/UX considerations, especially the “Provide an Invitation” section, where they discuss different ways of making interactions discoverable by a user. By the way, if anyone can recommend me a good book on UX, I would be grateful.

Grid System

The Ant layout system is composed of a 24-aliquot (a great new word that I learned from the translated documentation — it means parts of a whole) grid and a separate Layout component that you can choose to use. The grid uses the familiar Row/Col system, but you can also specify a prop called flex, which allows you to harness Flexbox properties to define a responsive UI. (See a previous blog post of mine for help grokking the Flex standard.)

Flexbox is now fully supported on just about every browser (with partial support on IE 11 as well as some older mobile browsers), so it should be fine to use. If your customer base is largely Internet Explorer users, which does happen in some industries or countries, you would be wise to abstain from using flex Rows or the Layout component, as Layout is built strictly on Flexbox.

Layout includes components for a Sider, Header, Content, and Footer. Again, these are strictly based on Flexbox, so there’s no choice here — but to be honest I’m not sure what these components give you on top of using the standard Row/Col grid system, aside from a couple extra props you can make use of and possibly some built-in design choices. All in all, it doesn’t seem to me to be hugely useful.

Grid Props

Col elements can be supplied with a span prop to define how many aliquots a column takes up and an offset prop to define an optional offset; Row can take a gutter prop to define space between columns in a row (in pixels, not aliquots).

Here’s a UI example from a side project of mine. It contains one row with two columns:

The code would look something like this:
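A minimal sketch, assuming antd’s Row and Col components; the span and gutter values are illustrative:

import React from 'react';
import { Row, Col } from 'antd';

// One row, two columns: 16 + 8 = 24 aliquots, separated by a 16px gutter.
const TwoColumnRow = () => (
  <Row gutter={16}>
    <Col span={16}>Main content</Col>
    <Col span={8}>Sidebar</Col>
  </Row>
);

export default TwoColumnRow;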

Forms

Ant does not let you down as far as forms are concerned, with options for inline, horizontal, and vertical forms, amazing select boxes, and clear validation messages and icons. In fact, it goes a little overboard here. It allows you to wrap your entire form-rendering component in a higher-order component à la Form.create()(<Component />) to gain access to a built-in validator syntax and custom two-way-binding system (cue audible lip biting). You can then specify standard rules such as ‘required’, or supply custom validator methods. (What are Higher Order Components? Check out this excellent post by James K. Nelson.)

Do you need to use their HOC? Absolutely not, and I’m not sure you should. As I said above, going down that path could expose you to language risk should you encounter bugs, and I don’t see why you would want to use a custom two-way data binding system anyway. But you could easily use the HOC and just not use the two-way data binding.

Au Naturel — Plain React Forms

So let’s go over how to use the Ant validation messages without using their higher-order component.

Ant gives us three props that we can supply to each Form.Item component to display validation messages or icons:

  1. validateStatus — This determines the colour & icon scheme of the validation message (see photo above) — valid options are success, warning, error, and validating.
  2. help — The validation message to display.
  3. hasFeedback — This is one of those props that don’t require a value. Just include it to display the associated icon (in JSX, a prop with no value is treated as true).
Prettiest validations that I’ve ever seen.

Here’s an example of a simple form element that displays a validation message:
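A minimal sketch, assuming the validity check lives in plain component state; the error and onChange props are illustrative, not part of antd:

import React from 'react';
import { Form, Input } from 'antd';

// Plain React props/state drive the validation display; no Form.create() involved.
const EmailField = ({ value, error, onChange }) => (
  <Form.Item
    label="Email"
    validateStatus={error ? 'error' : 'success'}
    help={error} // e.g. "Please enter a valid email address."
    hasFeedback
  >
    <Input value={value} onChange={onChange} />
  </Form.Item>
);

export default EmailField;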

Notice that I used the long-form Form.Item component name. You can make yourself a shortcut for this and any other Ant sub-components as follows:

const FormItem = Form.Item;

// ...which allows you to use:
<FormItem />

Form Validation using the Ant Higher-Order Component

Now what if we do want to make use of the Ant Form decorator? It’s fairly straightforward to implement. Create your React component class, and then pass it as an argument to Form.create(). The component can then be exported:

class SomeComponent extends React.Component {
  render() { return null; /* place your form here */ }
}
const FancyFormComponent = Form.create()(SomeComponent);
export default FancyFormComponent; // imported as SomeComponent

Inside your form, decorate your Input fields using the getFieldDecorator method, which exposes a ton of extra props on your component. You can now manipulate form elements directly from the props (eek!).
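A hedged sketch of that decoration, assuming the antd v2-era Form API; the field name and rule are illustrative:

import React from 'react';
import { Form, Input } from 'antd';

class EmailForm extends React.Component {
  render() {
    // Form.create() injects this.props.form, which provides getFieldDecorator.
    const { getFieldDecorator } = this.props.form;
    return (
      <Form>
        <Form.Item label="Email">
          {getFieldDecorator('email', {
            rules: [{ required: true, message: 'Please enter your email.' }],
          })(<Input />)}
        </Form.Item>
      </Form>
    );
  }
}

export default Form.create()(EmailForm);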

This example in the documentation gives a thorough demonstration on using the complete higher-order component.

Interactive Components — Message (Alert)

Ant provides a number of other components that give web applications a high degree of interactivity. A great example is alerts — or messages, as they’re called in Ant. Adding an alert is as simple as calling message.success('Great! Item has been saved.') in your component. Message types include success, warning, or error. Just don’t forget to import message (lowercase) from ‘antd’.
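For instance, a minimal sketch (the handler name is mine):

import { message } from 'antd';

const handleSave = () => {
  // Renders a temporary toast near the top of the page.
  message.success('Great! Item has been saved.');
};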

Minimalism at its Best

Installing Ant.Design

As I mentioned above, you can either go all-in on the Ant ecosystem (with its custom Webpack adapter), or just opt for the design framework. I went with the latter, and I suspect you might too, not least because using other parts of the ecosystem could require a working knowledge of Chinese. But I’ll cover both options.

Option 1 — Use the CLI

Ant comes with antd-init, a CLI for generating a complete React application with Ant installed. I do not recommend this route for non-Chinese speakers, but if you want to try it, getting started is easy. Just install the CLI using npm, create a new folder, and run antd-init:

npm install antd-init -g; mkdir demo-app; cd $_; antd-init;

You will then be greeted by the following message:

antd-init@2 is only for experience antd. If you want to create projects, it’s better to init with dva-cli. dva is a redux and react based application framework. elm concept, support side effects, hmr, dynamic load and so on.

It’s a rabbit hole. Open your new application and you will see that your familiar webpack.config.js file is no longer familiar — the CLI uses ant-tool, a “Build Tool Based on Webpack” that I mentioned above. The documentation is in Chinese, but it appears to set common defaults for Webpack and then allow you to just supply values that you want to override. Here’s what the config file looks like:

// Learn more on how to config.
// — https://github.com/ant-tool/atool-build#配置扩展
module.exports = function(webpackConfig) {
  webpackConfig.babel.plugins.push('transform-runtime');
  webpackConfig.babel.plugins.push(['import', {
    libraryName: 'antd',
    style: 'css',
  }]);
  return webpackConfig;
};

The index.js contains a lovely demo page that uses the understated Ant styling.

Option 2 — Use Standard Webpack

This would be my preferred route, but it can be more complicated getting your Webpack settings right at first. The Getting Started page includes some good instructions. First install Ant in your React app:

$ npm install antd --save

Ant recommends using their own babel-plugin-import in your .babelrc:

"presets": [
"react",
...
],
"plugins": ["transform-decorators-legacy", ..., ["import", [ libraryName: "antd", style: "css" ]]
]
}

Make sure your Webpack includes loaders for .js and .css files, and you should be good to go. To use an Ant component, import it in the module file. E.g.

import { Row, Col, Icon, Button } from 'antd';

Conclusion

There’s no doubt that Ant has a lot to offer as a UI framework, with a formidable catalogue of components and a serious ecosystem around it. It does, however, come with some risk. If you experience an issue with the library, you may be stuck communicating in Chinese. Ultimately I recommend trying it out if you like the minimalist aesthetic, while keeping usage of the peripheral Ant ecosystem to a minimum.

Sketch was made for screen-based design.
Websites, app interfaces, icons… these objects of design exist within a world of pixel measurements, RGB colors, and presentation on digital screens. Unlike many of the Adobe creative tools which include 10,000 features and the kitchen sink, Sketch is laser-focused in its purpose—and consequently works far better (and more efficiently) for what it does do.

Sketch was not made for print-based design.
Business cards, brochures, posters… these exist within a physical world of inch/centimeter/point/pica measurements, CMYK or Pantone colors, and presentation on a variety of papers and materials. Adobe Illustrator and InDesign are two of the most popular tools in this arena.

If you’re like me, you’re far more efficient working in Sketch.

And when a print design project rolls around, you might find yourself yearning to continue using the same tool you’ve become so adept at using for web/UI design. I want you to know that it’s possible. Here’s how I do it:

(full disclosure: Adobe Illustrator is required)

The Magic Number 72

Dating back to the craft of setting lead type for a printing press, the primary units of measurement were points (72 per inch) and picas (6 per inch, with 12 points to a pica). Lead type (pictured here) is measured in points, and is produced in pica or half-pica increments such as 12, 18, 24, 36, and 72 points. Those numbers should sound familiar to you, as they became standard digital font sizes with the Macintosh. The first Macs used screens where every inch contained 72 pixels, resulting in 12pt text that looked practically the same size onscreen as in print. The evolution of pixels per inch (PPI) is too extensive for this article (especially since the advent of retina displays), although it’s important to know a bit about the origins of this 72:1 ratio.

This article will mostly use inch measurements, as used for print design in the US. If you are familiar with a centimeter workflow, I’d love to hear from you!

Sketch measures everything in pixel units, so we need a way to convert our design to the physical world of inches. By now you may have guessed where this is going: 72 pixels in Sketch converts to 1 inch in an exported PDF.

  • An 8.5" × 11" piece of paper (US Letter) converts to a 612px × 792px artboard.
  • A typical 3.5" × 2" business card converts to a 252px × 144px artboard.
  • When adding a new artboard, Sketch 3 gives you a few “Paper Sizes” presets. Speed things up by adding your own custom artboard presets!
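If you’d rather compute the conversion than memorize it, here’s a tiny sketch of the 72:1 ratio in JavaScript (the helper names are mine):

const PX_PER_INCH = 72; // Sketch pixels per printed inch

const inchesToPx = (inches) => inches * PX_PER_INCH;
const pxToInches = (px) => px / PX_PER_INCH;

console.log(inchesToPx(8.5), inchesToPx(11)); // 612 792  (US Letter)
console.log(inchesToPx(3.5), inchesToPx(2));  // 252 144  (business card)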

The pixel dimensions of a 72 PPI layout may be far smaller than you are used to when working on websites or user interfaces. Remember that the clarity of your print project is dictated by the print method you use—Sketch’s “Show Pixels” function is of no use here!

Tips for Designing Your Layout

  • For elements in your design, try to use measurements that make sense in inches. 1px = 1pt for lines and font-sizes. I’ll often use 1/8 inch (9px) or 1/16 inch (4.5px) increments for layout elements.
  • You can use Sketch’s Grid feature to make these inch-appropriate positions or measurements easier. I suggest a grid with a 9px (1/8 inch) block size and thick lines every 8 blocks (1 inch). Show/hide the grid with ⌃G on your keyboard.
  • You can turn off “Pixel Fitting” in Preferences. There’s no need to be a stickler for pixel alignment as you would be for screen-based design.

Margins & Bleeds

Professional print shops often require your artwork to have extra space on all sides, extending any parts of your design that “bleed” out to the edge (see example below). This compensates for the slight, yet inevitable, variance in where the edges are cut on your final print. My printer asks for a 1/8 inch bleed, and I often add this to my Sketch layout (9px extra on all sides). If your design has elements that bleed, I suggest you do the same—if not, you can easily add these extra margins later when saving a PDF from Illustrator. Printers will also recommend that any text is at least 1/8 inch inside the trim lines (a “safe zone” or “critical print area”), as in the business card below.

The “Trim Lines” indicate what the final card will look like. Because trimming is rarely 100% accurate, any parts of the design that extend to the very edge should continue out to a “Bleed”. Shown here, the bleed extends to 1/8 inch outside the artwork.

Preparing the File for Print

99% of print shops are strict about the specifications of your “artwork” files. The following process will help you give printers the files they want! If your layout relies heavily on images, gradients, or shadows, skip to the next section!

When you have finished your design in Sketch, export it as a PDF at 1x scale. Many programs, such as Preview or Adobe Illustrator, will automatically interpret the file at 72 PPI. You can view the PDF’s dimensions in inches in Preview (Tools > Show Inspector, ⌘I), or in pixels using Finder’s Get Info window (under “More Info”). If you save your PDF through Illustrator, pixel and inch dimensions will be automatically included in the file.

There are 3 other things we need to change about Sketch’s exported PDF:

  1. Text needs to be “Converted to Outlines”.
  2. The colors need to be CMYK values instead of RGB.
  3. Any images in the design need to be embedded as CMYK images.

Converting Text to Outlines

To ensure that your design is printed exactly how you see it on your computer, it is important to convert the text objects in the PDF to actual vector shapes, or “outlines”. This makes the text look exactly the same on any program on any computer, regardless of the fonts you’ve used in the design, and regardless of whether or not those fonts are installed on the printer’s computer.

You can convert text to outlines in Sketch (more about that here), although if your design has more than a few lines of text, Sketch will slow down dramatically. If you want a guaranteed way to crash Sketch, try selecting a dozen text objects and converting them to outlines all at once! Fortunately, Adobe Illustrator excels in this department, so we’ll use that instead.

  • Open the PDF in Illustrator and navigate to Select > All (⌘A), from the menu bar.
  • Also in the menu bar, navigate to Type > Create Outlines (⌘⇧O). Easy as that!

Converting to CMYK Colors

After opening your PDF in Illustrator, navigate to File > Document Color Mode > CMYK Color. This converts the entire document to a CMYK colorspace from RGB. That’s the easy step. Now we have to change the colors in our design to actual CMYK values.

If you’re used to screen-based design and appreciate great colors, I feel obligated to tell you that CMYK may disappoint you. Due to the nature of combining those 4 colors (cyan, magenta, yellow, and black) in ink, many bright and saturated colors are difficult or impossible to recreate. Without diving into color theory or the pros/cons of various print methods, I will simply suggest that for any color that is important to your design you see a sample of that exact color value from a similar printer on a similar material. To do this I recommend choosing a close match on a Pantone swatchbook (a bit pricey, but a great investment), or ask your printer for a printed sample of a variety of colors printed on the paper you’ll use (they probably already have these, and can give you each color’s CMYK value).

Once you’ve chosen great CMYK values for all your colors, it’s time to replace the color value for each of the elements in your design. This sounds tedious—and to a certain extent it is—but I’ve discovered a few shortcuts to help you!

  • First off, you will need to select the elements whose colors you want to change. If you aren’t familiar with Illustrator, know that a layer is only selected when you click the small circle to the right of it. Simply clicking on the layer’s name will not do anything!
  • If your design has many elements with the same color (say, all green text), they can be selected all at once by first selecting one instance of the element then clicking the “Select Similar Objects” button on the right of the toolbar. If this toolbar or button isn’t available, try navigating to Select > Same in the menu bar.
  • When your elements are selected, hold down the Shift key when you click on the fill color in the toolbar (fill color to the left, stroke/border color to the right). Even elements that are pure black need to be converted to CMYK black, for which there is a little swatch below the color sliders.

Last Step!

When all of your text has been converted to outlines and all of your colors are CMYK, it’s time to save a separate PDF (I add “-print” as a suffix to the new filename). By using File > Save As, you get a trillion options for the PDF. The single option I ever use is to add a bleed margin (my printer likes 1/8 inch) on all sides of the artwork. To do this, go to the “Marks and Bleeds” section on the left and uncheck “Use Document Bleed Settings”, as shown below.

You’re all done! Trust me, next time this process will take you half as long!

Is Your Design Image-Heavy?

If your Sketch design includes bitmap images (non-vector images), they will be automatically converted from RGB to CMYK when you change the Document Color Mode. Upon importing the PDF to Illustrator, any shadows in your design will be converted to bitmap images and any gradients will become un-editable “Non-Native Art”. Because of this, if images, shadows, or gradients are important to your design, I strongly suggest you instead save the entire Sketch layout as a PNG and convert it to a CMYK file in Photoshop using the following steps.

  1. Export the Sketch artboard as a PNG at 4.166x scale, which gives you the number of pixels you’ll need for a 300 PPI print-ready file (see the quick check after this list). Printers rarely accept bitmap images below this resolution. Make sure your artboard includes the necessary bleed margins (described above) before export.
  2. Open the PNG in Photoshop and navigate to Image > Image Size, in the menu bar. Uncheck the “Resample” checkbox and type in either the artwork’s dimensions in inches or the “Pixels/Inch” you used when exporting from Sketch (again, this is often 300 PPI). Click “OK”.
  3. In the menu bar, navigate to Image > Mode > CMYK Color. This will alert you that Photoshop is converting the file to a default CMYK color profile. This step may visibly change the colors of your design. Rest assured that your computer screen is not an accurate representation of colors in print, although you should also not expect the same bright or saturated colors capable with RGB (as described above).
  4. Adjust the colors slightly if you desire, then Save As a .psd or .tif file. Be sure to tell the printer what bleed margins you included in the artwork!
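As a quick check of the numbers above (JavaScript, names are mine): the 4.166x export scale is just the ratio of the print resolution to Sketch’s 72 PPI.

const PRINT_PPI = 300;
const SKETCH_PPI = 72;

const exportScale = PRINT_PPI / SKETCH_PPI; // ≈ 4.1667, Sketch's "4.166x"

// A 3.5" × 2" card with a 1/8" bleed on every side is 3.75" × 2.25":
console.log(3.75 * PRINT_PPI, 2.25 * PRINT_PPI); // 1125 675 pixels at 300 PPI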

Of course you can use this process in conjunction with the PDF + Illustrator workflow above, by embedding the Photoshopped images into your Illustrator document. But most of the time I stick to one process or the other.

Is This Workflow Right for You?

If you’re fast at designing in Sketch, feel more at ease or more creative using it, or aren’t very familiar with Illustrator/InDesign, this may be good for you. This may also be a useful workflow if you have existing designs from Sketch (an interface, icon, logo) that you want to prepare for professional printing. I can’t read the future, but with Bohemian Coding’s small team and success focusing on screen-based design, I don’t advise you to hold your breath for print features. It’s a huge can of worms!

Examples of projects made with this workflow. From packaging, to letterpressed business cards, to laser-engraved signage. This work for Juice Shop recently won the Type Directors Club’s prestigious annual design competition.

I’ve written this article to share my workflow for print design projects, but also to learn of ways that I might improve this workflow in the future. If you have any suggestions, especially related to Illustrator or the print process, feel free to share them!

I just released Sketch Master — online training courses for professionals learning Sketch. You’ll learn tons of tricks and practical workflows, by designing real-world UI/UX and app icon projects.

CHRIS: Welcome! My name is Chris and I'm a designer on the Google Web Designer team. Today I'll walk through a new dynamic template with an emphasis on text. We'll cover customizations including configurable panels, selecting nested elements, dynamic text fitting, and editing groups, plus a demonstration of the template when uploaded into Display & Video 360 Ad Canvas. Let's get started.

First, let's navigate to the template library. You'll find the template under the thumbnail Data Driven for Display & Video 360. Notice we have three new template layouts to choose from: Blank Slate, Cue Cards, and Panorama. Today we'll be focusing on Cue Cards. Let's create a template using Cue Cards. I'm going to give the file a quick name and click Create.

Now, before we proceed in Google Web Designer, let's take a quick look at a design schematic of Cue Cards. Cue Cards is a template that utilizes elements and assets such as a logo, a background image, a swipe gallery, a swipe gallery navigation, an animated arrow icon, and three dynamic text groups labelled SlideA through SlideC. You'll also notice a few tap areas utilized for dynamic exits.

OK, jumping back into Google Web Designer. Let's review a few important panels for customizing and configuring the Cue Cards template. In the timeline you'll notice we have a lock icon. Let's click the lock icon to unlock and edit the layer, and select the component swipe-vertical. Next, navigate to the Properties panel. The Properties panel is where we can configure the element's attributes, style, position, and size, and also edit the component properties. You'll find this component is driven through the use of the groups SlideA, SlideB, and SlideC.

Now let's move to the Library panel. We'll find the individual group definitions and group contents in the Library. We can right-click a group name, click Edit, and edit the contents of the group. Pro tip: to quickly inspect the elements inside this group, we'll use the Outliner. The Outliner is a really cool new tool for viewing nested elements inside a group; instead of clicking through your divisions, you can rapidly find which element you would like to target and edit. You'll also notice in this creative we have two divisions: wrap-SlideA and txt-wrap-SlideA. These are dynamic text divisions that have a little bit of CSS logic that helps to auto-center them depending upon what type of information comes down through the feed.

Now let's click on txt-description-SlideA in the Outliner. You'll notice there's a T icon next to txt-description-SlideA; this signifies that it's a text element. With the text element selected, we'll come up to the panel at the top named Text. In the Text panel you'll be able to configure text fitting of dynamic text and also the styling of the text in your document. We can set a maximum size and also a minimum size, and when the dynamic text is passed to the division it will display the rendered, fitted text size.

Now let's navigate back to the root of the document. You'll notice we have breadcrumbs in the bottom left-hand corner of the stage, right above the timeline. Let's click Div to jump back to the root of our document. Two more notable panels are the Events panel and the Dynamic panel. In the Events panel we have events that are specific to the control of the animated arrow icon's behavior during autoplay and also during user gestures. Next to the Events panel we have the Dynamic tab. These are the dynamic bindings that enable this document to be bound dynamically, including assets, text, styling, and click exits. You'll also notice Brand Awareness is highlighted. Brand Awareness is the schema we are going to be utilizing inside of Display & Video 360 Ad Canvas. Click OK to exit the dialog.

As an added bonus, I would like to demonstrate the power of this creative. If I jump over to a mock from a visual designer, this is technically the spec the designer would like me to build to. This creative is dynamic, so the text could technically be interchanged. Let's fast forward to what the creative can look like if I built it using Google Web Designer's Cue Cards template. You'll notice as I refresh this page the creative auto-animates. The arrow tries to grab the user's attention by animating and jumping. The creative also has a navigation on the right-hand side where we can drive the creative. Users can also use gestures to scroll through the creative upon user interaction.

Let's say I wanted to publish this creative and upload it into Display & Video 360 Ad Canvas. So you might have a question: what is the Ad Canvas? The Ad Canvas is a visual editor you can use to build and edit creatives in real time. The Ad Canvas only supports our Google Web Designer data-driven templates and also custom variations. So in DV360 my template is loaded in the center, and on the right-hand side I have a UI that is editable on the fly. You'll notice text fitting is working. Variations and iterations can be knocked out, proofed, and signed off in a matter of minutes now with Google Web Designer's new data-driven templates in the Ad Canvas. The new dynamic workflow has never been easier. If you would like to learn more about Ad Canvas, please look in the details section of this video for a link to a comprehensive Display & Video 360 Ad Canvas demonstration.

This wraps up our video. Please have fun creating new dynamic ads. Thank you from the team at Google Web Designer.