AI’s Transformative Impact On Web Design: Supercharging Productivity Across The Industry

November 19, 2024

As I sit down to write this article, I can’t help but marvel at the incredible changes sweeping through our industry. For the first time in my career, I feel like we’re no longer limited by our technology but by our imagination. Well, almost. Artificial Intelligence (AI) is reshaping the web design landscape in ways we could only dream of a few years ago, and I’m excited to share with you how it’s supercharging our productivity across various aspects of our work.

AI Is A Partner, Not A Competitor

Now, before we dive in, let me assure you: AI isn’t here to replace us. At least, not for a while yet. There are still crucial areas where human expertise reigns supreme. AI struggles with strategic planning that considers the nuances of human behavior and market trends. It can’t match our emotional intelligence when navigating client relationships and team dynamics. And when it comes to creative thinking that pushes boundaries and creates truly innovative solutions, we humans still have the upper hand.

But here’s the exciting part: AI is becoming an invaluable tool that’s supercharging our productivity. Think of it as having a highly capable intern at your disposal. As the Nielsen Norman Group aptly put it in their article “AI as an Intern,” we need to approach AI tools with the right mindset. Double-check their work, use them for initial drafts rather than final products, and provide specific instructions and relevant context.

The key to harnessing AI’s power lies in working with it collaboratively.

Don’t expect perfect results on the first try. Instead, engage in a back-and-forth conversation, refining the AI output through iteration. This process of continuous improvement is where AI truly shines, dramatically speeding up our workflow.

Let’s explore how AI is reshaping different areas of our industry, looking at some of the many tools available to us.

AI In Design: Unleashing Creativity And Efficiency

In the realm of design, AI is proving to be a game-changer. It’s particularly adept at:

  • Collaborating on layout;
  • Generating the perfect image;
  • Refining and adapting existing imagery.

Take AI in Figma, for instance. It’s become my go-to for generating dummy content and tidying up my layers. The time saved on these mundane tasks allows me to focus more on the creative aspects of design.

Generative imagery tools like Krea, which uses Flux, have revolutionized how we source visuals. Gone are the days of endlessly scrolling through stock libraries. Now, we can describe exactly what we need, and AI will create it for us. This level of customization and specificity is a game changer for creating unique, on-brand visuals.

Relume is another tool I’ve found incredibly useful. It’s great for a collaborative back-and-forth over designs or for quickly putting together designs for smaller sites. The ability to iterate rapidly with AI assistance has significantly sped up my design process.

And let’s not forget about Adobe. Their upcoming tools, such as lighting matching for composite images, are set to take our design capabilities to new heights. If you want to see more of what’s on the horizon, I highly recommend checking out the latest Adobe Max presentations.

AI In Coding: Boosting Developer Productivity

The impact of AI on coding is nothing short of revolutionary. According to a McKinsey study, developers using AI tools performed coding tasks like code generation, refactoring, and documentation 20%-50% faster on average compared to those not using AI tools. That’s a significant productivity boost!

AI is helping developers in several key areas:

  • Coding faster;
  • Debugging more efficiently;
  • Automatically generating code comments;
  • Writing basic code.

To put this into perspective, AI now generates more than 25% of new code at Google. That’s a staggering figure from one of the world’s leading tech companies.

Tools like Cursor AI and GitHub Copilot are at the forefront of this revolution, offering features such as:

  • AI pair programming with predictive code edits;
  • Natural language code editing;
  • Chat interfaces for asking questions and getting help.

I’ve personally been amazed by what ChatGPT o1-preview can do. I’ve used it to build simple WordPress plugins in minutes; tasks that would have taken me significantly longer in the past.

AI In UX: Enhancing User Research And Analysis

In the field of User Experience (UX), AI is opening up new possibilities for research and analysis. It’s allowing us to:

  • Conduct user research at scale;
  • Analyze open-ended qualitative feedback;
  • Query analytic data using natural language;
  • Predict user behavior.

I’ve found ChatGPT to be an invaluable tool for data analysis, particularly when it comes to making sense of analytics and survey responses. It can quickly identify patterns and insights that might take us hours to uncover manually.

Tools like Strella are pushing the boundaries of what’s possible in user research. AI can now perform user interviews at scale, allowing us to gather insights from a much larger pool of users than ever before.

Attention Insight is another fascinating tool. It uses AI to predict where people will look on a page, providing valuable data for optimizing layouts and user interfaces.

Microsoft Clarity has also upped its game, allowing us to ask questions about our analytic insights, heatmaps, and session recordings. This natural language interface makes it easier than ever to extract meaningful insights from our data.

AI In Copywriting: Elevating Content Quality And Efficiency

When it comes to copywriting, AI is proving to be a powerful ally. It’s helping us:

  • Transform poor-quality stakeholder content into web-optimized copy;
  • Brainstorm value propositions and create high-converting copy;
  • Craft compelling business cases and strategy documentation;
  • Write standard operating procedures.

Notion AI has become one of my go-to tools for content creation. It combines the best of ChatGPT and Claude, allowing it to draw upon a wide range of provided source material to write detailed documentation, articles, and reports.

ChatGPT has been a game-changer for improving the quality of website copy. It can take user questions and bullet-point answers from subject specialists and transform them into web-optimized content. I’ve found it particularly useful for defining value propositions and writing high-converting copy.
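As an illustration, a prompt for that kind of transformation might look something like this (entirely made up; adjust the constraints to your own style guide):

You are a web copywriter. Below are five customer questions and
bullet-point answers from our subject specialist. Rewrite each
answer as scannable web copy: a short heading, a one-sentence
summary, and no more than 80 words of supporting detail. Keep
the specialist's facts; do not invent new claims.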

For refining existing content, the Hemingway Editor is invaluable. It can take the existing copy and make it clearer, more concise, and easier to scan — essential qualities for effective web content.

AI In Administration: Streamlining Mundane Tasks

AI isn’t just transforming the technical aspects of our work; it’s also making a significant impact on those mundane administrative tasks that often eat up our time. By leveraging AI, we can streamline various aspects of our daily workflow, allowing us to focus more on high-value activities.

Here are some ways AI is helping us tackle administrative tasks more efficiently:

  • Getting our thoughts down faster;
  • Writing better emails;
  • Summarizing long emails or Slack threads;
  • Speeding up research;
  • Backing up arguments with relevant data.

Let me share some personal experiences with AI tools that have transformed my administrative workflow:

I’ve dramatically increased my writing speed using Flow. This tool has taken me from typing at 49 words per minute to dictating at over 95 words per minute. Not only does it allow me to dictate, but it also tidies up my words to ensure they read well and are grammatically correct.

For email writing, I’ve found Fixkey to be invaluable. It helps me rewrite or reformat copy quickly. I’ve even set up a prompt that tones down my emails when I’m feeling particularly irritable, ensuring they sound reasonable and professional.

Dealing with long email threads or Slack conversations can be time-consuming. That’s where Spark comes in handy. It summarizes lengthy threads, saving me valuable time. Interestingly, this feature is now built into iOS for all notifications, making it even more accessible.

When it comes to research, Google NotebookLM has been a game-changer. I can feed it large amounts of data on a subject and quickly extract valuable insights. This tool has significantly sped up my research process.

Lastly, if I need to back up an argument with a statistic or quote, Perplexity has become my go-to resource. It quickly finds relevant sources I can quote, adding credibility to my work without the need for extensive manual searches.

Conclusion: Embracing AI For A More Productive Future

As I wrap up this article, I realize I’ve only scratched the surface of what AI can do for our industry. The real challenge isn’t in the technology itself but in breaking out of our established routines and taking the time to experiment with what’s possible.

I believe we need to cultivate a new habit: pausing before each new task to consider how AI could help us approach it differently. The winners in this new reality will be those who can best integrate this technology into their workflow.

AI isn’t replacing us — it’s empowering us to work smarter and more efficiently. By embracing these tools and learning to collaborate effectively with AI, we can focus more on the aspects of our work that truly require human creativity, empathy, and strategic thinking.

If you’re feeling overwhelmed by the rapid pace of change, don’t worry. We’re all learning and adapting together. Remember, the goal isn’t to become an AI expert overnight but to gradually incorporate these tools into your work in ways that make sense for you and your projects.

— Paul Boag
Open-Source Meets Design Tooling With Penpot

November 14, 2024

This article is sponsored by Penpot.

Penpot is a free, open-source design tool that allows true collaboration between designers and developers. Designers can create interactive prototypes and design systems at scale, while developers get ready-to-use code. Because it’s built with web technologies and works in the browser, the workflow stays easy and fast. The project has already passed 33K stars on GitHub.

The UI feels intuitive and makes it easy to get things done, even for someone who’s not a designer (guilty as charged!). You can get things done in the same way and with the same quality as with other more popular and closed-source tools like Figma.

Why Open-Source Is Important

As someone who works with commercial open source in my day-to-day, I strongly believe in it as a way to be closer to your users and unlock the next level of delivery. Being open-source creates a whole new level of accountability and flexibility for a tool.

Developers are a different breed of user. When we hit a quirk or a gap in the UX, our first instinct is to play detective and figure out why that pattern sticks out like a sore thumb compared to what we’ve been doing. When the code is open-source, it’s not unusual for us to jump into the source and create an issue that already includes a proposal on how to solve it. At least, that’s the dream.

On top of that, being open-source allows you and your team to self-host, giving you that extra layer of privacy and control, or at least a more cost-effective solution if you have the time and skills to DIY it all.

When the cards are played right, and the team is able to afford the long-term benefits, commercial open-source is a win-win strategy.

Introducing: Penpot Plugin System

Talking about the extensibility of open-source, Penpot has PenpotHub, the home for open-source templates and the newly released plugin gallery. So now, if there’s functionality missing, you don’t need to jump into the codebase straight away — you can create a plugin to achieve what you need. And you can even serve it from localhost!

Creating Penpot Plugins

When it comes to plugins, creating one is extremely ergonomic. There are already starter templates for a few frameworks, and I created one for SolidJS in this PR — the power of open-source!

Penpot plugins built with Vite are single-page applications; if you have ever built a Hello World app with Vite, you have what it takes to create a plugin. On top of that, the Penpot team provides a few packages that can give you a head start in the process:

npm install @penpot/plugin-styles
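With that installed, one way to pull the styles into a Vite-based plugin is a one-line import in your entry module. A minimal sketch, assuming a TypeScript entry file (the file name depends on the template you picked):

// main.ts (hypothetical entry file): Vite resolves CSS imports from node_modules
import "@penpot/plugin-styles/styles.css";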

That makes the styles available through a CSS loader or a plain CSS import from @penpot/plugin-styles/styles.css, as shown above. The JavaScript API is available through the window object, but if your plugin is written in TypeScript, you need to teach the compiler about it:

npm add -D @penpot/plugin-types

With those types in your node_modules, you can pop open the tsconfig.json and add the types to the compilerOptions:

{
  "compilerOptions": {
    "types": ["@penpot/plugin-types"]
  }
}

And there you are: now the Language Service Provider in your editor and the TypeScript compiler will accept that penpot is a valid namespace, and you’ll have auto-completion for the Penpot APIs throughout your entire project. For example, defining your plugin will look like the following:

penpot.ui.open("Your Plugin Name", "", {
  width: 500,
  height: 600
})

The last step is to define a plugin manifest in a manifest.json file and make sure it ends up in the output directory from Vite. The manifest indicates where each asset is and what permissions your plugin requires to work:

{
  "name": "Your Plugin Name",
  "description": "A Super plugin that will win Penpot Plugin Contest",
  "code": "/plugin.js",
  "icon": "/icon.png",
  "permissions": [
    "content:read",
    "content:write",
    "library:read",
    "library:write",
    "user:read",
    "comment:read",
    "comment:write",
    "allow:downloads"
  ]
}

Once the initial setup is done, communication between the Penpot API and the plugin interface happens through a bidirectional messaging system, not so different from what you’d do with a Web Worker.

So, to send a message from the plugin context to your plugin’s interface, you can do the following:

penpot.ui.sendMessage("Hello from my Plugin");

And to receive it back, you need to add an event listener to the window object (the top-level scope) of your plugin:

window.addEventListener("message", event => {
  console.log("Received from Penpot::: ", event.data);
})

A quick performance tip: if you’re creating a more complex plugin with different views and perhaps even routes, you need cleanup logic for your listeners. Most frameworks provide decent ergonomics to do that; React, for example, does it via the cleanup function returned from useEffect.

useEffect(() => {
  function handleMessage(e) {
    console.log("Received from Penpot::: ", e.data);
  }
  window.addEventListener('message', handleMessage);

  return () => window.removeEventListener('message', handleMessage);
}, []);

And Solid has onMount and onCleanup helpers for it:

// Define the handler where both onMount and onCleanup can reach it
function handleMessage(e) {
  console.log("Received from Penpot::: ", e.data);
}

onMount(() => {
  window.addEventListener('message', handleMessage);
})

onCleanup(() => {
  window.removeEventListener('message', handleMessage);
})

Or with the @solid-primitives/event-listener helper library, so the listener is disposed automatically:

import { makeEventListener } from "@solid-primitives/event-listener";

function Component() {
  function handleMessage(e) {
    console.log("Received from Penpot::: ", e.data);
  }

  // clear() can be called to remove the listener early;
  // otherwise it is disposed automatically on cleanup
  const clear = makeEventListener(window, "message", handleMessage);

  // ...
  return (<span>Hello!</span>)
}

In the official documentation, there’s a step-by-step guide that will walk you through the whole process of creating, testing, and publishing your plugin.

So, what are you waiting for?

Plugin Contest: Imagine, Build, Win

Well, maybe you’re waiting for a push of motivation. The Penpot team thought of that, which is why they’re starting a Plugin Contest!

For this contest, they want a fully functional plugin; it must be open-source and include comprehensive documentation detailing its features, installation, and usage. The first prize is US$1,000, and the judging criteria are innovation, functionality, usability, performance, and code quality. The contest will run from November 15th to December 15th.

Final Thoughts

If you decide to build a plugin, I’d love to know what you’re building and what stack you chose. Please let me know in the comments below or on BlueSky!

— Atila Fassina
Bundle Up And Save On Smashing Books And Workshops

November 12, 2024

Earlier this month, we wrapped up our SmashingConfs 2024, and one subject that kept coming up over and over again — among attendees, between speakers, and even within the staff — was “what would we like to get better at next year?”

It seems like everyone has a different answer, but one thing is for sure: Smashing can help you with that. 😉

Save 30% on Books and Workshops (3+ items)

Good news for people who love good value at a friendly price: throughout the entire month of November, you can save 30% on three or more books and eBooks, save 30% on three or more workshops, or get the entire Smashing eBook library for $75. This is a perfect time to set yourself up for a year of learning in 2025.

Build Your Own Bundle!

Our hardcover books are our editorial flagships. They deliver in-depth knowledge and expertise shared by experts from the industry. No fluff or theory, only practical tips and insights you can apply to your work right away.

No matter if you want to complete your existing Smashing book collection or start one from scratch, you can now pick and mix your favorite books and eBooks from our Smashing Books line-up to create a bundle tailored to your needs. Or take a look at our bundle recommendations below for some inspiration.

Save 30% with 3 or more books. No ifs or buts!

Throughout the month of November, the 30% book discount will automatically apply to your cart once you add three or more items. Please note that the discount can’t be combined with other discounts.

Once you’ve decided which books should go into your bundle, we’ll pack everything up safely and send the parcel right to your doorstep from our warehouse in Germany. Shipping is free worldwide. Check delivery times.

Book Bundle Recommendations

We know decisions can be hard, which is why we collected some ideas for themed book bundles to make the choice at least a bit easier. These bundles are perfect if you want to enhance your skills in one particular field.

You can click on each book title to jump to the details and add the book to your cart. The discount will be applied during checkout.

Front-End Bundle


Save 30% on the complete bundle.
Regular price (hardcover + PDF, ePUB, Kindle versions): $146.25
Bundle price: $102.38

Front-End Bundle dives deep into the challenges front-end developers face in their work every day — whether it’s performance, TypeScript, or image optimization. You’ll learn practical strategies to make your workflow more efficient, but also gain precious insights into how teams at Instagram, Shopify, Netflix, eBay, Figma, and Spotify tackle common challenges and frustrations.

Design Best Practices Bundle


Save 30% on the complete bundle.
Regular price (hardcover + PDF, ePUB, Kindle versions): $146.25
Bundle price: $102.38

Design Best Practices Bundle is all about creating respectful, engaging experiences that put your users’ well-being and privacy first. The practical and tactical tips help you encourage users to act — without employing dark patterns and manipulative techniques — and grow a sustainable digital business that becomes more profitable as a result.

Interface Design Bundle


Save 30% on the complete bundle.
Regular price (hardcover + PDF, ePUB, Kindle versions): $144.00
Bundle price: $100.80

A lot of websites these days look rather generic. Our Interface Design Bundle will make you think outside the box, exploring how you can create compelling, effective user experiences, with clear intent and purpose — for both desktop and mobile.

Big Design Bundle

Save 30% on the complete bundle.
Regular price (hardcover + PDF, ePUB, Kindle versions): $292.50
Bundle price: $204.75

The Big Design Bundle shines a light on the fine little details that go into building cohesive, human-centered experiences. Covering everything from design systems to form design patterns, mobile design to UX, and data privacy to ethical design, it provides you with a solid foundation for designing products that stand out from the crowd.

Special Deal For eBook Lovers

Get all eBooks for $75. More of an eBook kind of person? The bundle-up discounts apply to eBooks, too, of course. Or take a look at our Smashing Library to save even more: it includes 68 eBooks (including all our latest releases) for $75 (PDF, ePUB, and Kindle formats). Perfect for keeping all your favorite books close when you’re on the go.

Bundle Up On Online Workshops: 30% Off On 3+ Workshops

Ready to dig even deeper? Whether you want to explore design patterns for AI interfaces, learn how to plan a design system, or master the art of storytelling, our online workshops are a friendly way to boost your skills online and learn practical, actionable insights from experts, live.

To prepare for the challenges that 2025 might have in store for you, you can now bundle up 3 or more online workshops and save 30% on the total price with code BUNDLEUP. Please note that the discount can’t be combined with other discounts. Take a look at all our upcoming workshops and start your smashing learning journey this winter.

Custom Bundles To Fit Your Needs!

Do you need a custom bundle? At Smashing, we would love to know how we can help you and your team make the best of 2025. Let’s think big!

We’d be happy to craft custom bundles that work for you:

  • Group discounts on large teams for Smashing Membership,
  • Bulk discounts on books or other learning materials for education or non-profit groups,
  • Door prize packages for meetups and events,
  • ...any other possibilities for your team, your office, your career path. Let us know via contact form!

We are always looking for new ways to support our community. Please contact us if you’d like us to craft a custom bundle for your needs, or if you have any questions at all. Let’s have a great year!

Community Matters ❤️

Producing a book or a workshop takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for their kind, ongoing support. 👏 Happy learning, everyone!

— The Smashing Team
Alternatives To Typical Technical Illustrations And Data Visualisations

November 8, 2024

Good technical illustrations and data visualisations allow users and clients to, in a manner of speaking, take time out, ponder, and look at the information in a really accessible and engaging way. They can also help you communicate certain categories of information and data.

The aim of the article is to inspire and show you how, by using different technical illustrations and data visualisations, you can really engage and communicate with your users and do much more good for the surrounding content.

Below are interesting and not commonly seen examples of technical illustration and data visualisation that show data and information. As you know, the more commonly seen examples are bar graphs and pie charts, but there is so much more than that!

So, keep reading and looking at the following examples, and I will show you some really cool stuff.

Technology

Typically, technical illustration and data visualisation were done using paper, pens, pencils, compasses, and rulers. But now everything is possible: you can work analog or digital. Since the mainstream introduction of the internet around 1997, data (textual, numerical, symbolic) has flourished and become the gold currency of the current day. Technical illustration and data visualisation are easy to learn for anyone who has the software or knows a coding language, and they are much easier to produce than in previous years. But that does not always mean that what is done today is better than what was done before.

What Makes Data And Information Good

  • It must be aesthetically pleasing, interesting, and stimulating to look at.
  • It must be of value and worth the effort to read and digest the information.
  • It must be easy to understand and logical.
  • To convey ideas effectively, both aesthetic form and functionality need to go hand in hand, as Vitaly Friedman in his article “Data Visualization and Infographics” has pointed out.
  • It must be legible and have well-named and easy-to-understand axes and labels.
  • It should help explain and show data and information in a more interesting way than if it were presented in tabular form.
  • It should help explain or unpack what is written in any surrounding text and make it come to life in an unusual and useful way.
  • It must be easy to compare and contrast the data against the other data.

The Importance Of Knowing About Different Audiences

There are some common categories of audiences:

  • Expert,
  • Intermediate,
  • Beginner,
  • Member of the public,
  • Child,
  • Teenager,
  • Middle-aged,
  • Ageing,
  • Has some prior knowledge,
  • Does not have any prior knowledge,
  • Person with some kind of disability (vision, physical, hearing, mobility).

Sara Dholakia in her article “A Guide To Getting Data Visualization Right” points out the following considerations:

  • The audience’s familiarity with the subject matter of your data;
  • How much context they already have versus what you should provide;
  • Their familiarity with various methods of data visualisation.

Just look at what we are more often than not presented with.

So, let us dive into some cool examples that you can understand and start using today that will also give your work and content a really cool edge and help it stand out and communicate better.

3D Flow Diagram

It provides a great way to show relationships and connections between items and different components, and the 3D style adds a lot to the diagram.

Card Diagram

It’s an effective way to highlight and select information or data in relation to its surrounding data and information.

Pyramid Graph

Pyramid graphs are great at showing two categories of information and comparing them horizontally, making them an alternative to typical horizontal or vertical bar graphs.

3D Examples Of Common Graphs

They are an excellent way to enliven overused 2D pie and bar graphs. 3D examples of common graphs give a real sense of quality and depth whilst enhancing the data and information much more than 2D versions.

Sankey Flow Diagram

This diagram is a good way to show the progression and the journey of information and data and how they are connected in relation to their data value. It's not often seen, but it's really cool.

Stream Graph

A stream graph is a great way to show the data and how it relates to the other data — much more interesting than just using lines as is typically seen.

3D Map

It provides an excellent way to show a map in a different and more interesting form than the typically seen 2D version. It really gives the map a sense of environment, depth, and atmosphere.

Tree Map

It’s a great way to show the data spatially and how the data value relates, in terms of size, to the rest of the data.

Waterfall Chart

A waterfall chart is helpful in showing the data and how it relates in a vertical manner to the range of data values.

Doughnut Chart

It shows the data against the other data segments and also as a value within a range of data.

Lollipop Chart

A lollipop chart is an excellent method to demonstrate percentage values in a horizontal manner that also integrates the label and data value well.

Bubble Chart

It’s an effective way to illustrate data values in terms of size and sub-classification in relation to the surrounding data.

How To Start Doing Technical Illustration And Data Visualisation

There are many options available, including specialized software like Flourish, Tableau, and Klipfolio; familiar tools like Microsoft Word, Excel, or PowerPoint (with redrawing in software like Adobe Illustrator, Affinity Designer, or CorelDRAW); or learning coding languages such as D3, Three.js, P5.js, WebGL, or the Web Audio API, as Frederick O’Brien discusses in his article “Web Design Done Well: Delightful Data Visualization Examples.”
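If the coding route appeals to you, the lollipop chart described earlier makes a good first exercise. Here is a minimal sketch in plain JavaScript and SVG, with made-up data, sizes, and colors purely for illustration (no library required):

const data = [
  { label: "A", value: 72 },
  { label: "B", value: 45 },
  { label: "C", value: 90 },
];

const width = 320;
const rowHeight = 30;
const maxValue = 100;
const svgNS = "http://www.w3.org/2000/svg";

const svg = document.createElementNS(svgNS, "svg");
svg.setAttribute("width", width);
svg.setAttribute("height", data.length * rowHeight);

data.forEach((d, i) => {
  const y = i * rowHeight + rowHeight / 2;
  // Map the value to a horizontal position, leaving a gutter for labels
  const x = 40 + (d.value / maxValue) * (width - 80);

  // The "stick": a line from the label gutter to the value
  const line = document.createElementNS(svgNS, "line");
  line.setAttribute("x1", 40);
  line.setAttribute("y1", y);
  line.setAttribute("x2", x);
  line.setAttribute("y2", y);
  line.setAttribute("stroke", "#999");
  svg.appendChild(line);

  // The "pop": a circle marking the data value
  const dot = document.createElementNS(svgNS, "circle");
  dot.setAttribute("cx", x);
  dot.setAttribute("cy", y);
  dot.setAttribute("r", 6);
  dot.setAttribute("fill", "#d33");
  svg.appendChild(dot);

  // Category label on the left
  const text = document.createElementNS(svgNS, "text");
  text.setAttribute("x", 0);
  text.setAttribute("y", y + 4);
  text.textContent = d.label;
  svg.appendChild(text);
});

document.body.appendChild(svg);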

But there is one essential ingredient that you will always need, and that is a mind and vision for technical illustration and data visualisation.

Worthwhile Practitioners And Links To Look At

— Thomas Bohm
Why Optimizing Your Lighthouse Score Is Not Enough For A Fast Website

November 5, 2024

This article is sponsored by DebugBear.

We’ve all had that moment. You’re optimizing the performance of some website, scrutinizing every millisecond it takes for the current page to load. You’ve fired up Google Lighthouse from Chrome’s DevTools because everyone and their uncle uses it to evaluate performance.

We’ve all had that moment. You’re optimizing the performance of some website, scrutinizing every millisecond it takes for the current page to load. You’ve fired up Google Lighthouse from Chrome’s DevTools because everyone and their uncle uses it to evaluate performance.

After running your 151st report and completing all of the recommended improvements, you experience nirvana: a perfect 100% performance score!

Time to pat yourself on the back for a job well done. Maybe you can use this to get that pay raise you’ve been wanting! Except, don’t — at least not using Google Lighthouse as your sole proof. I know a perfect score produces all kinds of good feelings. That’s what we’re aiming for, after all!

Google Lighthouse is merely one tool in a complete performance toolkit. What it’s not is a complete picture of how your website performs in the real world. Sure, we can glean plenty of insights about a site’s performance and even spot issues that ought to be addressed to speed things up. But again, it’s an incomplete picture.

What Google Lighthouse Is Great At

I hear other developers boasting about perfect Lighthouse scores and see the screenshots published all over socials. Hey, I just did that myself in the introduction of this article!

Lighthouse might be the most widely used web performance reporting tool. I’d wager its ubiquity is due to convenience more than the quality of its reports.

Open DevTools, click the Lighthouse tab, and generate the report! There are even many ways we can configure Lighthouse to measure performance in simulated situations, such as slow internet connection speeds or creating separate reports for mobile and desktop. It’s a very powerful tool for something that comes baked into a free browser. It’s also baked right into Google’s PageSpeed Insights tool!

And it’s fast. Run a report in Lighthouse, and you’ll get something back in about 10-15 seconds. Try running reports with other tools, and you’ll find yourself refilling your coffee, hitting the bathroom, and maybe checking your email (in varying order) while waiting for the results. There’s a good reason for that, but all I want to call out is that Google Lighthouse is lightning fast as far as performance reporting goes.

To recap: Lighthouse is great at many things!

  • It’s convenient to access,
  • It provides a good deal of configuration for different levels of troubleshooting,
  • And it spits out reports in record time.

And what about that bright and lovely animated green score — who doesn’t love that?!

OK, that’s the rosy side of Lighthouse reports. It’s only fair to highlight its limitations as well. This isn’t to dissuade you or anyone else from using Lighthouse, but more of a heads-up that your score may not perfectly reflect reality — or even match the scores you’d get in other tools, including Google’s own PageSpeed Insights.

It Doesn’t Match “Real” Users

Not all data is created equal in capital-W Web Performance. It’s important to know this because data represents assumptions that reporting tools make when evaluating performance metrics.

The data Lighthouse relies on for its reporting is called simulated data. You might already have a solid guess at what that means: it’s synthetic data. Now, before kicking simulated data in the knees for not being “real” data, know that it’s the reason Lighthouse is super fast.

You know how there’s a setting to “throttle” the internet connection speed? That simulates different conditions that either slow down or speed up the connection speed, something that you configure directly in Lighthouse. By default, Lighthouse collects data on a fast connection, but we can configure it to something slower to gain insights on slow page loads. But beware! Lighthouse then estimates how quickly the page would have loaded on a different connection.

DebugBear founder Matt Zeunert outlines how data runs in a simulated throttling environment, explaining how Lighthouse uses “optimistic” and “pessimistic” averages for making conclusions:

“[Simulated throttling] reduces variability between tests. But if there’s a single slow render-blocking request that shares an origin with several fast responses, then Lighthouse will underestimate page load time.

Lighthouse averages optimistic and pessimistic estimates when it’s unsure exactly which nodes block rendering. In practice, metrics may be closer to either one of these, depending on which dependency graph is more correct.”

And again, the environment is a configuration, not reality. It’s unlikely that your throttled conditions match the connection speeds of an average real user on the website, as they may have a faster network connection or run on a slower CPU. What Lighthouse provides is more like “on-demand” testing that’s immediately available.

That makes simulated data great for running tests quickly and under certain artificially sweetened conditions. However, it sacrifices accuracy by making assumptions about the connection speeds of site visitors and averages things in a way that divorces it from reality.

While simulated throttling is the default in Lighthouse, it also supports more realistic throttling methods. Running those tests will take more time but give you more accurate data. The easiest way to run Lighthouse with more realistic settings is using an online tool like the DebugBear website speed test or WebPageTest.
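If you prefer the command line, the Lighthouse CLI lets you pick the throttling method explicitly. A quick sketch (the URL is a placeholder; the flags come from the lighthouse npm package):

npm install -g lighthouse

# Apply DevTools request-level throttling instead of the default simulation
lighthouse https://example.com --throttling-method=devtools --output=html --output-path=./report.html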

It Doesn’t Impact Core Web Vitals Scores

These Core Web Vitals everyone talks about are Google’s standard metrics for measuring performance. They go beyond simple “Your page loaded in X seconds” reports by looking at a slew of more pertinent details that are diagnostic of how the page loads, resources that might be blocking other resources, slow user interactions, and how much the page shifts around from loading resources and content. Zeunert has another great post here on Smashing Magazine that discusses each metric in detail.

The main point here is that the simulated data Lighthouse produces may (and often does) differ from performance metrics from other tools. I spent a good deal explaining this in another article. The gist of it is that Lighthouse scores do not impact Core Web Vitals data. The reason for that is Core Web Vitals relies on data about real users pulled from the monthly-updated Chrome User Experience (CrUX) report. While CrUX data may be limited by how recently the data was pulled, it is a more accurate reflection of user behaviors and browsing conditions than the simulated data in Lighthouse.

The ultimate point I’m getting at is that Lighthouse is simply ineffective at measuring Core Web Vitals performance metrics. Here’s how I explain it in my bespoke article:

“[Synthetic] data is fundamentally limited by the fact that it only looks at a single experience in a pre-defined environment. This environment often doesn’t even match the average real user on the website, who may have a faster network connection or a slower CPU.”

I emphasized the important part. In real life, users are likely to have more than one experience on a particular page. It’s not as though you navigate to a site, let it load, sit there, and then close the page; you’re more likely to do something on that page. And for a Core Web Vital metric that looks for slow paint in response to user input — namely, Interaction to Next Paint (INP) — there’s no way for Lighthouse to measure that at all!

It’s the same deal for a metric like Cumulative Layout Shift (CLS) that measures the “visible stability” of a page layout because layout shifts often happen lower on the page after a user has scrolled down. If Lighthouse relied on CrUX data (which it doesn’t), then it would be able to make assumptions based on real users who interact with the page and can experience CLS. Instead, Lighthouse waits patiently for the full page load and never interacts with parts of the page, thus having no way of knowing anything about CLS.

But It’s Still a “Good Start”

That’s what I want you to walk away with at the end of the day. A Lighthouse report is incredibly good at producing reports quickly, thanks to the simulated data it uses. In that sense, I’d say that Lighthouse is a handy “gut check” and maybe even a first step to identifying opportunities to optimize performance.

But a complete picture, it’s not. For that, what we’d want is a tool that leans on real user data. Tools that integrate CrUX data are pretty good there. But again, that data covers a trailing 28-day window, so it may not reflect the most recent user behaviors and interactions, although it is updated daily on a rolling basis, and it is indeed possible to query historical records for larger sample sizes.

Even better is using a tool that monitors users in real-time.

Data pulled directly from the site of origin is truly the gold standard data we want because it comes from the source of truth. That makes tools that integrate with your site the best way to gain insights and diagnose issues because they tell you exactly how your visitors are experiencing your site.

I’ve written about using the Performance API in JavaScript to evaluate custom and Core Web Vitals metrics, so it’s possible to roll that on your own. But there are plenty of existing services out there that do this for you, complete with visualizations, historical records, and true real-time user monitoring (often abbreviated as RUM). What services? Well, DebugBear is a great place to start. I cited Matt Zeunert earlier, and DebugBear is his product.
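To give a flavor of the roll-your-own route, here is a minimal sketch using the browser’s Performance API. It only logs to the console; a real RUM setup would send these values to an analytics endpoint:

// Log Largest Contentful Paint candidates as they occur
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log("LCP candidate at", entry.startTime, "ms");
  }
}).observe({ type: "largest-contentful-paint", buffered: true });

// Accumulate a rough Cumulative Layout Shift score from layout-shift entries
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Shifts right after user input are excluded from CLS by definition
    if (!entry.hadRecentInput) {
      cls += entry.value;
      console.log("Current CLS:", cls);
    }
  }
}).observe({ type: "layout-shift", buffered: true });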

So, if what you want is a complete picture of your site’s performance, go ahead and start with Lighthouse. But don’t stop there because you’re only seeing part of the picture. You’ll want to augment your findings and diagnose performance with real-user monitoring for the most complete, accurate picture.

— Geoff Graham
Ingredients For A Cozy November (2024 Wallpapers Edition)

October 31, 2024

When the days are gray and misty, as they are in many parts of the world in November, there’s no better remedy than a bit of colorful inspiration. To bring some good vibes to your desktops this month, artists and designers from across the globe once again tickled their creativity and designed unique and inspiring wallpapers that are bound to sweeten up your November.

The wallpapers in this post come in versions with and without a calendar for November 2024 and can be downloaded for free — as it has been a monthly tradition here at Smashing Magazine for more than 13 years already. As a little bonus goodie, we also added a selection of November favorites from our wallpapers archives to the post. Maybe you’ll spot one of your almost-forgotten favorites in here, too?

A huge thank-you to everyone who shared their wallpapers with us this month — this post wouldn’t exist without you. Happy November!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.
  • Submit your wallpaper design! 👩‍🎨
    Feeling inspired? We are always looking for creative talent and would love to feature your desktop wallpaper in one of our upcoming posts. We can’t wait to see what you’ll come up with!

Honoring the Sound of Jazz And Soul

“Today, we celebrate the saxophone, an instrument that has added its signature sound to jazz, rock, classical, and so much more. From smooth solos to bold brass harmonies, the sax has shaped the soundtrack of our lives. Whether you’re a seasoned player or just love its iconic sound, let’s show some love to this musical marvel!” — Designed by PopArt Studio from Serbia.

Snow Falls In The Alps

“The end of the year is approaching, and winter is just around the corner, so we’ll spend this month in the Alps. The snow has arrived in the mountains, and we can take advantage of it to ski or have a hot coffee while we watch the flakes fall.” — Designed by Veronica Valenzuela from Spain.

The Secret Cave

Designed by Ricardo Gimenes from Spain.

Happy Thanksgiving Day

Designed by Cronix from the United States.

Space Explorer

“A peaceful, minimalist wallpaper of a lone cartoon astronaut floating in space surrounded by planets and stars.” — Designed by Reethu from London.

The Sailor

Designed by Ricardo Gimenes from Spain.

Happy Diwali

Designed by Cronix from the United States.

Elimination Of Violence Against Women

“November 25th is the “International Day for the Elimination of Violence Against Women.” We wanted to create a wallpaper that can hopefully contribute to building awareness and support.” — Designed by Friendlystock from Greece.

Square Isn’t It

“When playing with lines, which were at the beginning displaying a square, I finally arrived to this drawing, and I was surprised. I thought it would make a nice wallpaper for one’s desktop, doesn’t it?” — Designed by Philippe Brouard from France.

Transition

“Inspired by the transition from autumn to winter.” — Designed by Tecxology from India.

Sunset Or Sunrise

“November is autumn in all its splendor. Earthy colors, falling leaves, and afternoons in the warmth of the home. But it is also adventurous and exciting and why not, different. We sit in Bali contemplating Pura Ulun Danu Bratan. We don’t know if it’s sunset or dusk, but… does that really matter?” — Designed by Veronica Valenzuela Jimenez from Spain.

Cozy Autumn Cups And Cute Pumpkins

“Autumn coziness, which is created by fallen leaves, pumpkins, and cups of cocoa, inspired our designers for this wallpaper.” — Designed by MasterBundles from Ukraine.

A Jelly November

“Been looking for a mysterious, gloomy, yet beautiful desktop wallpaper for this winter season? We’ve got you, as this month’s calendar marks Jellyfish Day. On November 3rd, we celebrate these unique, bewildering, and stunning marine animals. Besides adorning your screen, we’ve got you covered with some jellyfish fun facts: they aren’t really fish, they need very little oxygen, eat a broad diet, and shrink in size when food is scarce. Now that’s some tenacity to look up to.” — Designed by PopArt Studio from Serbia.

Colorful Autumn

“Autumn can be dreary, especially in November, when rain starts pouring every day. We wanted to summon better days, so that’s how this colourful November calendar was created. Open your umbrella and let’s roll!” — Designed by PopArt Studio from Serbia.

Winter Is Here

Designed by Ricardo Gimenes from Spain.

Moonlight Bats

“I designed some Halloween characters and then this idea came to my mind — a bat family hanging around in the moonlight. A cute and scary mood is just perfect for autumn.” — Designed by Carmen Eisendle from Germany.

Time To Give Thanks

Designed by Glynnis Owen from Australia.

The Kind Soul

“Kindness drives humanity. Be kind. Be humble. Be humane. Be the best of yourself!” — Designed by Color Mean Creative Studio from Dubai.

Anbani

“Anbani means alphabet in Georgian. The letters that grow on that tree are the Georgian alphabet. It’s very unique!” — Designed by Vlad Gerasimov from Georgia.

Tempestuous November

“By the end of autumn, ferocious Poseidon will part from tinted clouds and timid breeze. After this uneven clash, the sky once more becomes pellucid just in time for imminent luminous snow.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Me And The Key Three

Designed by Bart Bonte from Belgium.

Mushroom Season

“It is autumn! It is raining and thus… it is mushroom season! It is the perfect moment to go to the forest and get the best mushrooms to do the best recipe.” — Designed by Verónica Valenzuela from Spain.

Welcome Home Dear Winter

“The smell of winter is lingering in the air. The time to be home! Winter reminds us of good food, of the warmth, the touch of a friendly hand, and a talk beside the fire. Keep calm and let us welcome winter.” — Designed by Acodez IT Solutions from India.

Outer Space

“We were inspired by the nature around us and the universe above us, so we created an out-of-this-world calendar. Now, let us all stop for a second and contemplate on preserving our forests, let us send birds of passage off to warmer places, and let us think to ourselves — if not on Earth, could we find a home somewhere else in outer space?” — Designed by PopArt Studio from Serbia.

Captain’s Home

Designed by Elise Vanoorbeek from Belgium.

Holiday Season Is Approaching

Designed by ActiveCollab from the United States.

Deer Fall, I Love You

Designed by Maria Porter from the United States.

Peanut Butter Jelly Time

“November is the Peanut Butter Month so I decided to make a wallpaper around that. As everyone knows peanut butter goes really well with some jelly so I made two sandwiches, one with peanut butter and one with jelly. Together they make the best combination.” — Designed by Senne Mommens from Belgium.

International Civil Aviation Day

“On December 7, we mark International Civil Aviation Day, celebrating those who prove day by day that the sky really is the limit. As the engine of global connectivity, civil aviation is now, more than ever, a symbol of social and economic progress and a vehicle of international understanding. This monthly calendar is our sign of gratitude to those who dedicate their lives to enabling everyone to reach their dreams.” — Designed by PopArt Studio from Serbia.

November Nights On Mountains

“Those chill November nights when you see mountain tops covered with the first snow sparkling in the moonlight.” — Designed by Jovana Djokic from Serbia.

November Fun

Designed by Xenia Latii from Germany.

November Ingredients

“Whether or not you celebrate Thanksgiving, there’s certain things that always make the harvest season special. As a Floridian, I’m a big fan of any signs that the weather might be cooling down soon, too!” — Designed by Dorothy Timmer from the United States.

Universal Children’s Day

“Universal Children’s Day, November 20. It feels like a dream world, it invites you to let your imagination flow, see the details, and find the child inside you.” — Designed by Luis Costa from Portugal.

A Gentleman’s November

Designed by Cedric Bloem from Belgium.

— Cosima Mielke
Designing For Gen Z: Expectations And UX Guidelines

October 30, 2024

Every generation is different in very unique ways, with different habits, views, standards, and expectations. So when designing for Gen Z, what do we need to keep in mind? Let’s take a closer look at Gen Z, how they use tech, and why it might be a good idea to ignore common design advice and do the opposite of what is usually recommended instead.

This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns 🍣 — with live UX training coming up soon. Free preview.

Gen Z: Most Diverse And Most Inclusive

When we talk about Generation Z, we usually refer to people born between 1995 and 2010. Of course, making universal statements about a cohort where some are adults in their late 20s and others are school students is at best ineffective and at worst wrong — yet there are some attributes that stand out compared to earlier generations.

Gen Z is the most diverse generation in terms of race, ethnicity, and identity. Research shows that young people today are caring and proactive, and far from being “slow, passive and mindless” as they are often described. In fact, they are willing to take a stand and break their habits if they deeply believe in a specific purpose and goal. Surely there are many distractions along that way, but the belief in fairness and sense of purpose has enormous value.

Their values reflect that: accessibility, inclusivity, sustainability, and work/life balance are top priorities for Gen Zs, and they value experiences, principles, and social stance over possessions.

What Gen Z Deeply Cares About

Gen Z grew up with technology, so unsurprisingly digital experiences are very familiar and understood by them. On the other hand, digital experiences are often suboptimal at best — slow, inaccessible, confusing, and frustrating. Plus, the web is filled with exaggerations and generic but fluffy statements. So it’s not a big revelation that Gen Zs are highly skeptical of brands and advertising by default (rightfully so!), and rely almost exclusively on social circles, influencers, and peers as main research channels.

They might sometimes struggle to spot what’s real and what’s not, but they are highly selective about their sources. They are always connected and used to following events live as they unfold, so unsurprisingly, Gen Z tends to have little patience.

And sure enough, Gen Z loves short-form content, but that doesn’t necessarily equate to a short attention span. Attention span is context-dependent, as documentaries and literature are among Gen Z’s favorites.

Designing For Gen Z

Most design advice on Gen Z focuses on producing “short-form, snackable, bite-sized” content. That content is optimized for very short attention spans and TikTok-like content consumption and is simplified to the core messaging. I would strongly encourage us to do the opposite.

We shouldn’t discount Gen Z as a generation with poor attention spans and urgent needs for instant gratification. Gen Zs have very strong beliefs and values, but they are also inherently curious and want to reshape the world. We can tell a damn good story. Captivate and engage. Make people think. Many Gen Zs are highly ambitious and motivated, and they want to be challenged and to succeed. So let’s support that. And to do that, we need to remain genuine and authentic.

Remain Genuine And Authentic

As Michelle Winchester noted, Gen Zs have very diverse perspectives and opinions, and they possess a discerning ability to detect disingenuous content. That’s also where mistrust towards AI comes into play, along with AI fatigue. As Nilay Patel mentioned on the Ezra Klein Show, today when somebody says that something is “AI-generated,” it’s usually not praise but rather a testament to how poor and untrustworthy it actually is.

Gen Z expects better. Hence brands that value sincerity, honesty, and authenticity are perceived as more trustworthy compared to brands that don’t have an opinion, don’t take a stand, don’t act for their beliefs and principles. For example, the “Keep Beauty Real” campaign by Dove (shown below) showcases the value of genuine human beauty, which is so often missed and so often exaggerated to extremes by AI.

Gareth Ford Williams has put together a visual language of closed captions and has kindly provided a PDF cheatsheet that is commonly used by professional captioners. There are some generally established rules about captioning, and here are some that I found quite useful when working on captioning for my own video course:

  • Divide your sentences into two relatively equal parts like a pyramid (40ch per line for the top line, a bit less for the bottom line);
  • Always keep an average of 20 to 30 characters per second;
  • A sequence should only last between 1 and 8 seconds;
  • Always keep a person’s name or title together;
  • Do not break a line after conjunction;
  • Consider aligning multi-lined captions to the left (see the short WebVTT sketch after this list).
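Applied to an actual caption file, those rules look something like this minimal WebVTT sketch, with invented timings and text, and each cue split into two roughly equal lines:

WEBVTT

00:00:01.000 --> 00:00:05.000
Divide each sentence into two
relatively equal parts

00:00:05.000 --> 00:00:09.500
and keep a person’s name or title
together on a single line.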

On YouTube, users can select a font used for subtitles and choose between monospaced and proportional serif and sans-serif, casual, cursive, and small-caps. But perhaps, in addition to stylistic details, we could provide a careful selection of fonts to help audiences with different needs. This could include a dyslexic font or a hyper-legible font, for example.

Additionally, we could display presets for various high contrast options for subtitles. This gives users a faster selection, requiring less effort to configure just the right combination of colors and transparency. Still, it would be useful to provide more sophisticated options just in case users need them.

Support Intrinsic Motivation

On the other hand, in times of instant gratification with likes, reposts, and leaderboards, people often learn that a feeling of achievement comes from extrinsic signals, like reach or attention from other people. That makes it all the more important to support intrinsic motivation.

As Paula Gomes noted, intrinsic motivation is characterized by engaging in behaviors just for their own sake. People do something because they enjoy it. It is when they care deeply for an activity and enjoy it without needing any external rewards or pressure to do it.

Typically this requires 3 components:

  • Competence involves the need to feel capable of achieving a desired outcome.
  • Autonomy is about the need to feel in control of your own actions, behaviors, and goals.
  • Relatedness reflects the need to feel a sense of belonging and attachment to other people.

In practical terms, that means setting people up for success: preparing the knowledge, documents, and skills they need ahead of time, and building knowledge up without necessarily rewarding them with points. It also means allowing people to have a strong sense of ownership of the decisions and the work they are doing, and adding collaborative goals that require cooperation with team members and colleagues.

Encourage Critical Thinking

The younger people are, the more difficult it is to distinguish between what’s real and what isn’t. Whenever possible, show sources or at least explain where to find specific details that back up claims that you are making. Encourage people to make up their mind, and design content to support that — with scientific papers, trustworthy reviews, vetted feedback, and diverse opinions.

And: you don’t have to shy away from technical details. Don’t make them mandatory to read and understand, but make them accessible and available in case readers or viewers are interested.

In times where there is so much fake, exaggerated, dishonest, and AI-generated content, it might be just enough to be perceived as authentic, trustworthy, and attention-worthy by the highly selective and very demanding Gen Z.

Good Design Is For Everyone

I keep repeating myself like a broken record, but better accessibility is better for everyone. As you hopefully have noticed, many attributes and expectations that we see in Gen Z are beneficial for all other generations, too. It’s just good, honest, authentic design. And that’s the very heart of good UX.

What I haven’t mentioned is that Gen Z genuinely appreciates feedback and values platforms that listen to their opinions and make changes based on their feedback. So the best thing we can do, as designers, is to actively involve Gen Z in the design process. Designing with them, rather than designing for them.

And, most importantly: with Gen Z, perhaps for the first time ever, inclusion and accessibility are becoming a default expectation for all digital products. With them comes a sense of fairness, diversity, and respect. And, personally, I strongly believe that it’s a great thing — and a testament to how remarkable Gen Z actually is.

Wrapping Up
  • Large parts of Gen Z aren’t mobile-first, but mobile-only.
  • To some, the main search engine is YouTube, not Google.
  • Some don’t know and have never heard of Internet Explorer.
  • Trust only verified customer reviews, influencers, friends.
  • Used to follow events live as they unfold → little patience.
  • Sustainability, reuse, work/life balance are top priorities.
  • Prefer social login as the fastest authentication method.
  • Typically ignore or close cookie banners, without consent.
  • Rely on social proof, honest reviews/photos, authenticity.
  • Most likely generation to provide a referral to a product.
  • Typically turn on subtitles for videos by default.
New: How To Measure UX And Design Impact

I’ve just launched “How To Measure UX and Design Impact” 🚀 (8h), a new practical guide for UX leads to measure UX impact on business. Use the code 🎟 IMPACT to save 20% off today. And thank you for your kind and ongoing support, everyone! Jump to details.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[CSS min() All The Things]]> https://smashingmagazine.com/2024/10/css-min-all-the-things/ https://smashingmagazine.com/2024/10/css-min-all-the-things/ Thu, 17 Oct 2024 10:00:00 GMT Did you see this post that Chris Coyier published back in August? He experimented with CSS container query units, going all in and using them for every single numeric value in a demo he put together. And the result was… not too bad, actually.

See the Pen Container Units for All Units [forked] by Chris Coyier.

What I found interesting about this is how it demonstrates the complexity of sizing things. We’re constrained to absolute and relative units in CSS, so we’re either stuck at a specific size (e.g., px) or computing the size based on sizing declared on another element (e.g., %, em, rem, vw, vh, and so on). Both come with compromises, so it’s not like there is a “correct” way to go about things — it’s about the element’s context — and leaning heavily in any one direction doesn’t remedy that.

I thought I’d try my own experiment but with the CSS min() function instead of container query units. Why? Well, first off, we can supply the function with any type of length unit we want, which makes the approach a little more flexible than working with one type of unit. But the real reason I wanted to do this is personal interest more than anything else.

The Demo

I won’t make you wait for the end to see how my min() experiment went:

Taking website responsiveness to a whole new level 🌐 pic.twitter.com/pKmHl5d0Dy

— Vayo (@vayospot) March 1, 2023


We’ll talk about that more after we walk through the details.

A Little About min()

The min() function takes two (or more) values and applies the smallest one in the element’s current context. For example, we can say we want an element to be as wide as 50% of whatever container it is in. And if 50% of the container is greater than, say, 200px, cap the width there instead.

See the Pen [forked] by Geoff Graham.

So, min() is sort of like container query units in the sense that it is aware of how much available space it has in its container. But it’s different in that min() isn’t querying its container dimensions to compute the final value. We supply it with two acceptable lengths, and it determines which is best given the context. That makes min() (and max() for that matter) a useful tool for responsive layouts that adapt to the viewport’s size. It uses conditional logic to determine the “best” match, which means it can help adapt layouts without needing to reach for CSS media queries.

.element {
  width: min(200px, 50%);
}

/* Close to this: */
.element {
  width: 200px;

  @media (min-width: 600px) {
    width: 50%;
  }
}

The difference between min() and @media in that example is that we’re telling the browser to set the element’s width to 50% at a specific breakpoint of 600px. With min(), it switches things up automatically as the amount of available space changes, whatever viewport size that happens to be.

When I use min(), I think of it as having the ability to make smart decisions based on context. We don’t have to do the thinking or calculations to determine which value is used. However, coupling min() with just any CSS unit isn’t enough. For instance, relative units work better for responsiveness than absolute units. You might even think of min() as setting a maximum value: the result never exceeds either argument, so whichever one is smaller in a given context acts as the cap.
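A quick sketch to illustrate that framing:

.element {
  /* min() resolves to whichever value is smaller in context,
     so each argument acts as a cap on the other: */
  width: min(50%, 200px);
  /* narrow container: 50% computes below 200px, so 50% wins */
  /* wide container: 50% computes above 200px, so 200px caps it */
}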

I mentioned earlier that we could use any type of unit in min(). Let’s take the same approach that Chris did and lean heavily into a type of unit to see how min() behaves when it is used exclusively for a responsive layout. Specifically, we’ll use viewport units as they are directly relative to the size of the viewport.

Now, there are different flavors of viewport units. We can use the viewport’s width (vw) and height (vh). We also have the vmin and vmax units that are slightly more intelligent in that they evaluate the viewport’s width and height and apply either the smaller (vmin) or larger (vmax) of the two. So, if we declare 100vmax on an element, and the viewport is 500px wide by 250px tall, the unit computes to 500px.
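To make that concrete, here’s a small example assuming a viewport that is 500px wide by 250px tall:

/* With a 500px × 250px viewport:
   1vmin = 2.5px (1% of the smaller edge, 250px)
   1vmax = 5px (1% of the larger edge, 500px) */
.box {
  width: 50vmax;  /* 50 × 5px = 250px */
  height: 50vmin; /* 50 × 2.5px = 125px */
}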

That is how I am approaching this experiment. What happens if we eschew media queries in favor of only using min() to establish a responsive layout and lean into viewport units to make it happen? We’ll take it one piece at a time.

Font Sizing

There are various approaches for responsive type. Media queries are quickly becoming the “old school” way of doing it:

p { font-size: 1.1rem; }

@media (min-width: 1200px) {
  p { font-size: 1.2rem; }
}

@media (max-width: 350px) {
  p { font-size: 0.9rem; }
}

Sure, this works, but what happens when the user uses a 4K monitor? Or a foldable phone? There are other tried and true approaches; in fact, clamp() is the prevailing go-to. But we’re leaning all-in on min(). As it happens, just one line of code is all we need to wipe out all of those media queries, substantially reducing our code:

p { font-size: min(6vmin, calc(1rem + 0.23vmax)); }

I’ll walk you through those values…

  1. 6vmin is essentially 6% of the browser’s width or height, whichever is smallest. This allows the font size to shrink as much as needed for smaller contexts.
  2. For calc(1rem + 0.23vmax), 1rem is the base font size, and 0.23vmax is a tiny fraction of the viewport‘s width or height, whichever happens to be the largest.
  3. The calc() function adds those two values together. Since 0.23vmax is evaluated differently depending on which viewport edge is the largest, it’s crucial when it comes to scaling the font size between the two arguments. I’ve tweaked it into something that scales gradually one way or the other rather than blowing things up as the viewport size increases.
  4. Finally, the min() returns the smallest value suitable for the font size of the current screen size.

And speaking of how flexible the min() approach is, it can restrict how far the text grows. For example, we can cap this at a maximum font-size equal to 2rem as a third function parameter:

p { font-size: min(6vmin, calc(1rem + 0.23vmax), 2rem); }
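And since clamp() came up earlier as the prevailing go-to, here’s a rough equivalent written with it (a side-by-side sketch, not part of the original experiment):

/* clamp(MIN, PREFERRED, MAX) resolves to the preferred value,
   bounded on both ends: */
p { font-size: clamp(1rem, calc(1rem + 0.23vmax), 2rem); }

One difference worth noting: clamp() guarantees the 1rem floor, while the min() version above can shrink below it on very small viewports.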

This isn’t a silver bullet tactic. I’d say it’s probably best used for body text, like paragraphs. We might want to adjust things a smidge for headings, e.g., <h1>:

h1 { font-size: min(7.5vmin, calc(2rem + 1.2vmax)); }

We’ve bumped up the minimum size from 6vmin to 7.5vmin so that it stays larger than the body text at any viewport size. Also, in the calc(), the base size is now 2rem, in line with the default UA styles for <h1>. We’re using 1.2vmax as the multiplier this time, meaning it grows more than the body text, which is multiplied by a smaller value, 0.23vmax.

This works for me. You can always tweak these values and see which works best for your use. Whatever the case, the font-size for this experiment is completely fluid and completely based on the min() function, adhering to my self-imposed constraint.

Margin And Padding

Spacing is a big part of layout, responsive or not. We need margin and padding to properly situate elements alongside other elements and give them breathing room, both inside and outside their box.

We’re going all-in with min() for this, too. We could use absolute units, like pixels, but those aren’t exactly responsive.

min() can combine relative and absolute units so they are more effective. Let’s pair vmin with px this time:

div { margin: min(10vmin, 30px); }

10vmin is likely to be smaller than 30px on a small viewport, which is why I’m allowing the margin to shrink dynamically this time around. Once the viewport grows to the point where 10vmin exceeds 30px, min() caps the value at 30px, going no higher than that.

Notice, too, that I didn’t reach for calc() this time. Margins don’t really need to grow indefinitely with screen size, as too much spacing between containers or elements generally looks awkward on larger screens. This concept also works extremely well for padding, but we don’t have to go there. Instead, it might be better to stick with a single unit, preferably em, since it is relative to the element’s font-size. We can essentially “pass” the work that min() is doing on the font-size to the margin and padding properties because of that.

.card-info {
  font-size: min(6vmin, calc(1rem + 0.12vmax));
  padding: 1.2em;
}

Now, padding scales with the font-size, which is powered by min().

Widths

Setting width for a responsive design doesn’t have to be complicated, right? We could simply use a single percentage or viewport unit value to specify how much available horizontal space we want to take up, and the element will adjust accordingly. Though, container query units could be a happy path outside of this experiment.

But we’re min() all the way!

min() comes in handy when setting constraints on how much an element responds to changes. We can let an element take up the full 100% of the available width and, once that computes to more than 650px, cap the width there:

.container { width: min(100%, 650px); }

Things get interesting with text width. When a line of text is too long, it becomes uncomfortable to read. There are competing theories about how many characters per line of text is best for an optimal reading experience. For the sake of argument, let’s say that number should be between 50-75 characters. In other words, we ought to pack no more than 75 characters on a line, and we can do that with the ch unit, which is based on the width of the 0 character for whatever font is in use.

p {
  width: min(100%, 75ch);
}

This code basically says: get as wide as needed but never wider than 75 characters.
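One small refinement you could layer on top (my own addition, not something the demo requires) is letting the capped paragraph center itself once it stops growing:

p {
  width: min(100%, 75ch);
  margin-inline: auto; /* keeps the text block centered once
                          75ch is narrower than the container */
}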

Sizing Recipes Based On min()

Over time, with a lot of tweaking and modifying of values, I have drafted a list of pre-defined values that I find work well for responsively styling different properties:

:root {
  --font-size-6x: min(7.5vmin, calc(2rem + 1.2vmax));
  --font-size-5x: min(6.5vmin, calc(1.1rem + 1.2vmax));
  --font-size-4x: min(4vmin, calc(0.8rem + 1.2vmax));
  --font-size-3x: min(6vmin, calc(1rem + 0.12vmax));
  --font-size-2x: min(4vmin, calc(0.85rem + 0.12vmax));
  --font-size-1x: min(2vmin, calc(0.65rem + 0.12vmax));
  --width-2x: min(100vw, 1300px);
  --width-1x: min(100%, 1200px);
  --gap-3x: min(5vmin, 1.5rem);
  --gap-2x: min(4.5vmin, 1rem);
  --size-10x: min(15vmin, 5.5rem);
  --size-9x: min(10vmin, 5rem);
  --size-8x: min(10vmin, 4rem);
  --size-7x: min(10vmin, 3rem);
  --size-6x: min(8.5vmin, 2.5rem);
  --size-5x: min(8vmin, 2rem);
  --size-4x: min(8vmin, 1.5rem);
  --size-3x: min(7vmin, 1rem);
  --size-2x: min(5vmin, 1rem);
  --size-1x: min(2.5vmin, 0.5rem);
}

This is how I approached my experiment because it helps me know what to reach for in a given situation:

h1 { font-size: var(--font-size-6x); }

.container {
  width: var(--width-2x);
  margin: var(--size-2x);
}

.card-grid { gap: var(--gap-3x); }

There we go! We have a heading that scales flawlessly, a container that’s responsive and never too wide, and a grid with dynamic spacing — all without a single media query. The --size- properties declared in the variable list are the most versatile, as they can be used for properties that require scaling, e.g., margins, paddings, and so on.

The Final Result, Again

I shared a video of the result, but here’s a link to the demo.

See the Pen min() website [forked] by Vayo.

So, is min() the be-all, end-all for responsiveness? Absolutely not. Neither is a diet consisting entirely of container query units. I mean, it’s cool that we can scale an entire webpage like this, but the web is never a one-size-fits-all beanie.

If anything, I think this and what Chris demoed are warnings against dogmatic approaches to web design as a whole, not solely unique to responsive design. CSS features, including length units and functions, are tools in a larger virtual toolshed. Rather than getting too cozy with one feature or technique, explore the shed because you might find a better tool for the job.

]]>
hello@smashingmagazine.com (Victor Ayomipo)
<![CDATA[It’s Here! How To Measure UX & Design Impact, With Vitaly Friedman]]> https://smashingmagazine.com/2024/10/new-video-course-how-to-measure-ux-design-impact/ https://smashingmagazine.com/2024/10/new-video-course-how-to-measure-ux-design-impact/ Tue, 15 Oct 2024 14:00:00 GMT Finally! After so many years, we’re very happy to launch “How To Measure UX and Design Impact”, our new practical guide for designers and managers on how to set up and track design success in your company — with UX scorecards, UX metrics, the entire workflow and Design KPI trees. Neatly put together by yours truly, Vitaly Friedman. Jump to details.

Video + UX Training

$495.00 $799.00
Get Video + UX Training

25 video lessons (8h) + Live UX Training.
100-day money-back guarantee.

Video only

$250.00 $395.00
Get the video course

25 video lessons (8h). Updated yearly.
Also available as a UX Bundle with 2 video courses.

The Backstory

In many companies, designers are perceived as disruptors rather than enablers. Designers challenge established ways of working. They ask a lot of questions — much-needed ones, but also uncomfortable ones. They focus “too much” on user needs, pushing revenue projections back, often with long-winded commitments to testing, research, planning, and scoping.

Almost every department in almost every company has their own clearly defined objectives, metrics and KPIs. In fact, most departments — from finance to marketing to HR to sales — are remarkably good at visualizing their impact and making it visible throughout the entire organization.

Designing a KPI tree, an example of how to connect business objectives with design initiatives through the lens of design KPIs.

But as designers, we rarely have a set of established Design KPIs that we regularly report to senior management. We don’t have a clear definition of design success. And we rarely measure the impact of our work once it’s launched. So it’s not surprising that most parts of the business barely know what we actually do all day long.

Business wants results. It also wants to do more of what has worked in the past. But it doesn’t want to be disrupted — it wants to disrupt. It wants to reduce time to market and minimize expenses, increase revenue, grow existing business, and find new markets. This requires fast delivery and good execution.

And that’s what we are often supposed to be — good “executors”. Or, to put it differently, “pixel pushers”.

Over the years, I’ve been searching for a way to change that. This brought me to Design KPIs and UX scorecards, and a workflow to translate business goals into actionable and measurable design initiatives. I had to find a way to explain, visualize, and track the incredible impact that designers have on all parts of business — from revenue to loyalty to support to delivery.

The results of that journey are now public in our new video course: “How To Measure UX and Design Impact” — a practical guide for designers, researchers and UX leads to measure and visualize UX impact on business.

About The Course

The course dives deep into establishing team-specific design KPIs: how to track them effectively, how to set up ownership, and how to integrate metrics into the design process. You’ll discover how to translate ambiguous objectives into practical design goals, and how to measure design systems and UX research.

Also, we’ll make sense of OKRs, Top Task Analysis, SUS, UMUX-Lite, UEQ, TPI, KPI trees, feedback scoring, gap analysis, and Kano model — and what UX research methods to choose to get better results. Jump to the table of contents or get your early-bird.

The setup for the video recordings. Once all content is in place, it’s about time to set up the recording.
Jump to the video course → https://measure-ux.com

A practical guide to UX metrics and Design KPIs
8h-video course + live UX training. Free preview.

  • 25 chapters (8h), with videos added/updated yearly
  • Free preview, examples, templates, workflows
  • No subscription: get once, access forever
  • Lifetime access to all videos, slides, checklists
  • Add-on: live UX training, running 2× a year
  • Use the code SMASHING to get 20% off today
  • Jump to the details →
Table of Contents

25 chapters, 8 hours, with practical examples, exercises, and everything you need to master the art of measuring UX and design impact. Don’t worry, even if it might seem overwhelming at first, we’ll explore things slowly and thoroughly. Taking 1–2 sessions per week is a perfectly good goal to aim for.

We can’t improve without measuring. That’s why our new video course gives you the tools you need to make sense of it all: user needs, just like business needs.
1. Welcome

So, how do we measure UX? Well, let’s find out! A friendly welcome to the video course, outlining all the fine details we’ll be going through: design impact, business metrics, design metrics, surveys, target times and states, measuring UX in B2B and enterprises, design KPI trees, the Kano model, event storming, choosing metrics, reporting design success — and how to measure design systems and UX research efforts.

Keywords:
Design impact, UX metrics, business goals, articulating design value, real-world examples, showcasing impact, evidence-driven design.

2. Design Impact

In this segment, we’ll explore how and where we, as UX designers, make an impact within organizations. We’ll explore where we fit in the company structure, how to build strong relationships with colleagues, and how to communicate design value in business terms.

Keywords:
Design impact, design ROI, orgcharts, stakeholder engagement, business language vs. UX language, Double Diamond vs. Reverse Double Diamond, risk mitigation.

3. OKRs and Business Metrics

We’ll explore the key business terms and concepts related to measuring business performance. We’ll dive into business strategy and tactics, and unpack the components of OKRs (Objectives and Key Results), KPIs, SMART goals, and metrics.

Keywords:
OKRs, objectives, key results, initiatives, SMART goals, measurable goals, time-bound metrics, goal-setting framework, business objectives.

4. Leading And Lagging Indicators

Businesses often speak of leading and lagging indicators — predictive and retrospective measures of success. Let’s explore what they are and how they are different — and how we can use them to understand the immediate and long-term impact of our UX work.

Keywords:
Leading vs. lagging indicators, cause-and-effect relationship, backwards-looking and forward-looking indicators, signals for future success.

5. Business Metrics, NPS

We dive into the world of business metrics, from Monthly Active Users (MAU) to Monthly Recurring Revenue (MRR) to Customer Lifetime Value (CLV), and many other metrics that often find their way to dashboards of senior management.

Also, almost every business measures NPS. Yet NPS has many limitations: it requires a large sample size to be statistically reliable, and what people say and what people do are often very different things. Let’s see what we as designers can do with NPS, and how it relates to our UX work.

Keywords:
Business metrics, MAU, MRR, ARR, CLV, ACV, Net Promoter Score, customer loyalty.


6. Business Metrics, CSAT, CES

We’ll explore the broader context of business metrics, including revenue-related measures like Monthly Recurring Revenue (MRR) and Annual Recurring Revenue (ARR), Customer Lifetime Value (CLV), and churn rate.

We’ll also dive into Customer Satisfaction Score (CSAT) and Customer Effort Score (CES). We’ll discuss how these metrics are calculated, their importance in measuring customer experience, and how they complement other well-known (but not necessarily helpful) business metrics like NPS.

Keywords:
Customer Lifetime Value (CLV), churn rate, Customer Satisfaction Score (CSAT), Customer Effort Score (CES), Net Promoter Score (NPS), Monthly Recurring Revenue (MRR), Annual Recurring Revenue (ARR).

7. Feedback Scoring and Gap Analysis

If you are looking for a simple alternative to NPS, feedback scoring and gap analysis might be a neat little helper. It transforms qualitative user feedback into quantifiable data, allowing us to track UX improvements over time. Unlike NPS, which focuses on future behavior, feedback scoring looks at past actions and current perceptions.

Keywords:
Feedback scoring, gap analysis, qualitative feedback, quantitative data.

8. Design Metrics (TPI, SUS, SUPR-Q)

We’ll explore the landscape of established and reliable design metrics for tracking and capturing UX in digital products. From task success rate and time on task to System Usability Scale (SUS) to Standardized User Experience Percentile Rank Questionnaire (SUPR-Q) to Accessible Usability Scale (AUS), with an overview of when and how to use each, the drawbacks, and things to keep in mind.

Keywords:
UX metrics, KPIs, task success rate, time on task, error rates, error recovery, SUS, SUPR-Q.

9. Design Metrics (UMUX-Lite, SEQ, UEQ)

We’ll continue with slightly shorter alternatives to SUS and SUPR-Q that could be used in a quick email survey or an in-app prompt — UMUX-Lite and Single Ease Question (SEQ). We’ll also explore the “big behemoths” of UX measurements — User Experience Questionnaire (UEQ), Google’s HEART framework, and custom UX measurement surveys — and how to bring key metrics together in one simple UX scorecard tailored to your product’s unique needs.

Keywords:
UX metrics, UMUX-Lite, Single Ease Question (SEQ), User Experience Questionnaire (UEQ), HEART framework, UEQ, UX scorecards.

10. Top Tasks Analysis

The most impactful way to measure UX is to study how successful users are in completing their tasks in their common customer journeys. With top tasks analysis, we focus on what matters, and explore task success rates and time on task. We need to identify representative tasks and bring 15–18 users in for testing. Let’s dive into how it all works and some of the important gotchas and takeaways to consider.

Keywords:
Top task analysis, UX metrics, task success rate, time on task, qualitative testing, 80% success, statistical reliability, baseline testing.

11. Surveys and Sample Sizes

Designing good surveys is hard! We need to be careful about how we shape our questions to avoid biases, how to find the right segment of the audience and a large enough sample size, and how to ensure high confidence levels and low margins of error. In this chapter, we review best practices and a cheat sheet for better survey design — along with do’s and don’ts on question types, rating scales, and survey pre-testing.

Keywords:
Survey design, question types, rating scales, survey length, pre-testing, response rates, statistical significance, sample quality, mean vs. median scores.

12. Measuring UX in B2B and Enterprise

The best measurements come from testing with actual users. But what if you don’t have access to any users? Perhaps because of NDAs, IP concerns, lack of clearance, poor availability, the high cost of reaching customers, or simply a lack of users? Let’s explore how we can find a way around such restrictive environments, how to engage with stakeholders, and how we can measure efficiency and failures — and set up UX KPI programs.

Keywords:
B2B, enterprise UX, limited access to users, compliance, legacy systems, desk research, stakeholder engagement, testing proxies, employees’ UX.

13. Design KPI Trees

To visualize design impact, we need to connect high-level business objectives with specific design initiatives. To do that, we can build up and present Design KPI trees. From the bottom up, the tree captures user needs, pain points, and insights from research, which inform design initiatives. For each, we define UX metrics to track the impact of these initiatives, and they roll up to higher-level design and business KPIs. Let’s explore how it all works in action and how you can use it in your work.

Keywords:
User needs, UX metrics, KPI trees, sub-trees, design initiatives, setting up metrics, measuring and reporting design impact, design workflow, UX metrics graphs, UX planes.

14. Event Storming

How do we choose the right metrics? Well, we don’t start with metrics. We start by identifying the most critical user needs and assessing the impact of meeting those needs well. To do that, we apply event storming, mapping the user’s critical success moments as they interact with a digital product. Our job, then, is to maximize success, remove frustrations, and pave a clear path towards a successful outcome.

Keywords:
UX mapping, customer journey maps, service blueprints, event storming, stakeholder alignment, collaborative mapping, UX lanes, critical events, user needs vs. business goals.

15. Kano Model and Survey

Once we have a business objective in front of us, we need to choose the design initiatives that are most likely to drive the impact we need to enable with our UX work. To test how effective our design ideas are, we can map them against a Kano model and run a concept testing survey. This gives us users’ sentiment, which we then need to weigh against business priorities. Let’s see how to do just that.

Keywords:
Feature prioritization, threshold attributes, performance attributes, excitement attributes, user’s sentiment, mapping design ideas, boosting user’s satisfaction.

16. Design Process

How do we design a KPI tree from scratch? We start by running a collaborative event storming session to identify key success moments. Then we prioritize key events and explore how we can amplify and streamline them. Then we ideate and come up with design initiatives. These initiatives are stress-tested in an impact-effort matrix for viability and impact. Eventually, we define and assign metrics and KPIs and pull them together in a KPI tree. Here’s how it works from start to finish.

Keywords:
Uncovering user needs, impact-effort matrix, concept testing, event storming, stakeholder collaboration, traversing the KPI tree.

17. Choosing The Right Metrics

Should we rely on established UX metrics such as SUS, UMUX-Lite, and SUPR-Q, or should we define custom metrics tailored to product and user needs? We need to find a balance between the two. It depends on what we want to measure, what we actually can measure, and whether we want to track local impact for a specific change or global impact for the entire customer journey. Let’s figure out how to define and establish metrics that actually will help us track our UX success.

Keywords:
Local vs. global KPIs, time spans, percentage vs. absolute values, A/B testing, mapping between metrics and KPIs, task breakdown, UX lanes, naming design KPIs.

18. Design KPIs Examples

Different contexts will require different design KPIs. In this chapter, we explore a diverse set of UX metrics related to search quality (quality of search for top 100 search queries), form design (error frequency, accuracy), e-commerce (time to final price), subscription-based services (time to tier boundaries), customer support (service desk inquiries) and many others. This should give you a good starting point to build upon for your own product and user needs.

Keywords:
Time to first success, search results quality, form error recovery, password recovery rate, accessibility coverage, time to tier boundaries, service desk inquiries, fake email frequency, early drop-off rate, carbon emissions per page view, presets and templates usage, default settings adoption, design system health.

19. UX Strategy

Establishing UX metrics doesn’t happen overnight. You need to discuss and decide what you want to measure and how often, but also how to integrate metrics, evaluate data, and report findings, and how to embed them into an existing design workflow. For that, you will need time — and green lights from your stakeholders and managers. To achieve that, we need to tap into the uncharted waters of UX strategy. Let’s see what it involves for us and how to make progress there.

Keywords:
Stakeholder engagement, UX maturity, governance, risk mitigation, integration, ownership, accountability, viability.

20. Reporting Design Success

Once you’ve established UX metrics, you will need to report them repeatedly to the senior management. How exactly would you do that? In this chapter, we explore the process of selecting representative tasks, recruiting participants, facilitating testing sessions, and analyzing the resulting data to create a compelling report and presentation that will highlight the value of your UX efforts to stakeholders.

Keywords:
Data analysis, reporting, facilitation, observation notes, video clips, guidelines and recommendations, definition of design success, targets, alignment, and stakeholder’s buy-in.

21. Target Times and States

To show the impact of our design work, we need to track UX snapshots. Basically, it’s four states, mapped against touch points in a customer journey: baseline (threshold not to cross), current state (how we are currently doing), target state (objective we are aiming for), and industry benchmark (to stay competitive). Let’s see how it would work in an actual project.

Keywords:
Competitive benchmarking, baseline measurement, local and global design KPIs, cross-teams metrics, setting realistic goals.

22. Measuring Design Systems

How do we measure the health of a design system? Surely it’s not just the roll-out speed for newly designed UI components or flows. Most teams track productivity and coverage, but we can also go beyond that by measuring relative adoption and efficiency gains (time saved, faster time-to-market, satisfaction scores, and product quality). But the best metric is how early designers bring the design system into the conversation during their design work.

Keywords:
Component coverage, decision trees, adoption, efficiency, time to market, user satisfaction, usage analytics, design system ROI, relative adoption.

23. Measuring UX Research

Research insights often end up gathering dust in PDF reports stored on the remote fringes of SharePoint. To track the impact of UX research, we need to track outcomes and research-specific metrics: UX research impact on UX and business, organizational learning and engagement, and the make-up of research efforts and their reach. And most importantly: amplifying research where we expect the most significant impact. Let’s see what that involves.

Keywords:
Outcome metrics, organizational influence, research-specific metrics, research references, study observers, research formalization, tracking research-initiated product changes.

24. Getting Started

So you’ve made it this far! Now, how do you get your UX metrics initiative off the ground? By taking small steps in the right direction. Small commitments, pilot projects, and design guilds will support and enable your efforts. We just need to define realistic goals and turn UX metrics into a culture of measurement, or simply a way of working. Let’s see how we can do just that.

Keywords:
Pilot projects, UX integration, resource assessment, evidence-driven design, establishing a baseline, culture of measurement.

25. Next Steps

Let’s wrap up our journey into UX metrics and Design KPIs and reflect on what we have learned. What remains is the first next step: starting where you are and growing incrementally, continuously visualizing and explaining your UX impact — however limited it might be — to your stakeholders. This is the last chapter of the course, but the first chapter of the incredible journey that’s ahead of you.

Keywords:
Stakeholder engagement, incremental growth, risk mitigation, user satisfaction, business success.


Who Is The Course For?

This course is tailored for advanced UX practitioners, design leaders, product managers, and UX researchers who are looking for a practical guide to define, establish and track design KPIs, translate business goals into actionable design tasks, and connect business needs with user needs.

What You’ll Learn

By the end of the video course, you’ll have a packed toolbox of practical techniques and strategies on how to define, establish, sell, and measure design KPIs from start to finish — and how to make sure that your design work is always on the right trajectory. You’ll learn:

  • How to translate business goals to UX initiatives,
  • The difference between OKRs, KPIs, and metrics,
  • How to define design success for your company,
  • Metrics and KPIs that businesses typically measure,
  • How to choose the right set of metrics and KPIs,
  • How to establish design KPIs focused on user needs,
  • How to build a comprehensive design KPI tree,
  • How to combine qualitative and quantitative insights,
  • How to choose and prioritize design work,
  • How to track the impact of design work on business goals,
  • How to explain, visualize, and defend design work,
  • How companies define and track design KPIs,
  • How to make a strong case for UX metrics.
Community Matters ❤️

Producing a video course takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. So thank you from the bottom of our hearts! We hope you’ll find the course useful for your work. Happy watching, everyone! 🎉🥳


]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Using Multimodal AI Models For Your Applications (Part 3)]]> https://smashingmagazine.com/2024/10/using-multimodal-ai-models-applications-part3/ https://smashingmagazine.com/2024/10/using-multimodal-ai-models-applications-part3/ Fri, 11 Oct 2024 10:00:00 GMT In this third and final part of a three-part series, we’re taking a more streamlined approach to an application that supports vision-language models (VLMs) and text-to-speech (TTS). This time, we’ll use different models that are designed for all three modalities — images or videos, text, and audio (including speech-to-text) — in one model. These “any-to-any” models make things easier by allowing us to avoid switching between models.

Specifically, we’ll focus on two powerful models: Reka and Gemini 1.5 Pro.

Both models take things to the next level compared to the tools we used earlier. They eliminate the need for separate speech recognition models, providing a unified solution for multimodal tasks. With this in mind, our goal in this article is to explore how Reka and Gemini simplify building advanced applications that handle images, text, and audio all at once.

Overview Of Multimodal AI Models

The architecture of multimodal models has evolved to enable seamless handling of various inputs, including text, images, and audio, among others. Traditional models often require separate components for each modality, but recent advancements in “any-to-any” models like Next-GPT or 4M allow developers to build systems that process multiple modalities within a unified architecture.

Gato, for instance, utilizes a 1.2 billion parameter decoder-only transformer architecture with 24 layers, embedding sizes of 2048 and a hidden size of 8196 in its feed-forward layers. This structure is optimized for general tasks across various inputs, but it still relies on extensive task-specific fine-tuning.

GPT-4o, on the other hand, takes a different approach with training on multiple media types within a single architecture. This means it’s a single model trained to handle a variety of inputs (e.g., text, images, code) without the need for separate systems for each. This training method allows for smoother task-switching and better generalization across tasks.

Similarly, CoDi employs a multistage training scheme to handle a linear number of tasks while supporting input-output combinations across different modalities. CoDi’s architecture builds a shared multimodal space, enabling synchronized generation for intertwined modalities like video and audio, making it ideal for more dynamic multimedia tasks.

Most “any-to-any” models, including the ones we’ve discussed, rely on a few key concepts to handle different tasks and inputs smoothly:

  • Shared representation space
    These models convert different types of inputs — text, images, audio — into a common feature space. Text is encoded into vectors, images into feature maps, and audio into spectrograms or embeddings. This shared space allows the model to process various inputs in a unified way.
  • Attention mechanisms
    Attention layers help the model focus on the most relevant parts of each input, whether it’s understanding the text, generating captions from images, or interpreting audio.
  • Cross-modal interaction
    In many models, inputs from one modality (e.g., text) can guide the generation or interpretation of another modality (e.g., images), allowing for more integrated and cohesive outputs.
  • Pre-training and fine-tuning
    Models are typically pre-trained on large datasets across different types of data and then fine-tuned for specific tasks, enhancing their performance in real-world applications.
Reka Models

Reka is an AI research company that helps developers build powerful applications by offering models for a range of tasks. These tasks include generating text from videos and images, translating speech, and answering complex questions from long multimodal documents. Reka’s models can even write and execute code, providing flexible, real-world solutions for developers.

These are the three main models Reka offers:

  1. Reka Core
    A 67-billion-parameter multimodal language model designed for complex tasks. It supports inputs like images, videos, and texts while excelling in advanced reasoning and coding.
  2. Reka Flash
    A faster, 21-billion-parameter model designed for flexibility and rapid performance in multimodal settings.
  3. Reka Edge (PDF)
    A smaller, 7-billion-parameter model built for on-device use, making it efficient for local or latency-sensitive applications.

Reka’s models can be fine-tuned and deployed securely, whether on the cloud, on-premises, or even on-device. Let’s start by testing Reka’s capabilities directly through its playground. This allows us to experiment with its multimodal features without writing any code, providing a hands-on way to see how the models handle various tasks, such as image and video comprehension.

Alright, we’ll kick things off by uploading an image of a diagram outlining the PaliGemma architecture and asking Reka for a detailed explanation.

Can you provide a more detailed explanation of this image?

Here’s what we get from Reka Core:

“This image is a schematic representation of a machine-learning model named PaliGemma. The model appears to be designed for multimodal input, combining both image and text data. At the top, there are two input sources: ‘Image Input’ and ‘Text Input’. The image input is processed by a ‘SigLIP Image Encoder’, which likely stands for a specific type of image encoding technique. The output of this encoder is then subjected to a ‘Linear Projection’, which is a common step in machine learning to transform data into a lower-dimensional space. The text input is directly fed into the model. Both the projected image data and the text input are then concatenated into ‘Concatenated Tokens’, which are then processed by a component labeled ‘Gemma’. The final output of the model is ‘Text Output’, which suggests that the model is designed to generate text based on the input data.”

It’s a clear and detailed breakdown of the PaliGemma architecture. The model accurately identifies image and text inputs, the SigLIP image encoder, and the linear projection step. It also grasps how the image and text inputs are combined and processed to generate text output. I should admit that the response is quite neat, offering a solid explanation of the architecture.

Let’s try a video input instead. We’ll upload a video about Genny, a piece of software, and ask a question to see how well Reka Core handles video comprehension.

What is this video about?
“This video is a screen recording of a group of friends discussing their plans for the weekend.”

Well, it’s not a good answer compared to what LLaVA gave us in the previous article. Now let’s try a longer video, but with the Reka Flash model. The video is about a TTS model called Parler TTS. Instead of asking what the video is about, let’s prompt it like this:

I’m an AI developer, and I’m not sure how this TTS model can help me in my projects. Can you explain how I can use it or what it’s useful for?

Reka Flash generates a solid answer this time:

“Of course! This TTS (Text-to-Speech) model can be incredibly useful for developers, especially those working on projects that involve creating more human-like interactions or providing audio feedback to users. Here are some ways you can leverage this TTS model in your projects:
  1. Accessibility: TTS models can greatly enhance the accessibility of your applications for users with visual impairments or reading difficulties. By providing spoken feedback or narrating content, you can make your applications more inclusive
  2. Voice Assistants: You can integrate this TTS model into voice assistants or chatbots, allowing them to generate more natural-sounding responses. This can improve user experience and make interactions with your bots more engaging.
  3. Multimedia Applications: This TTS model can be used in multimedia applications, such as e-books, educational tools, or games, to provide spoken narration or instructions. This can make the content more engaging and easier to understand.
  4. Audio Notifications: You can use the TTS model to generate audio notifications for your applications, providing users with spoken alerts or reminders. This can be particularly useful for users who may not always have their screens visible.
  5. Language Learning: The TTS model can be used in language learning applications to provide users with correct pronunciation and intonation. It can also generate spoken feedback or narrate lessons, making the learning process more immersive.
  6. Audio Books: You can use the TTS model to generate audio books, allowing users to listen to written content instead of reading it. This can be especially helpful for users who prefer auditory learning or have difficulty reading.
  7. Explainer Videos: The TTS model can be used to generate voiceovers for explainer videos, providing users with spoken narration that explains the content on the screen. This can make the content more engaging and easier to understand.
  8. Audio Feedback: You can use the TTS model to generate audio feedback for users in your applications, such as confirming actions, providing instructions, or giving feedback on their performance. This can make the interactions more natural and intuitive.
In summary, the TTS model can be a valuable tool for developers looking to create more human-like interactions, improve accessibility, and enhance the user experience in various applications.”

It’s a good answer that’s informative and quite practical for developers. Let’s compare what happens when we try the Reka Edge model with a PDF input titled “Meta AI — Responsible Use Guide” (PDF).

As an AI developer, how can I ensure that I am following the responsible AI guidelines outlined in this document?
“To ensure that your use of AI aligns with the principles outlined in this guide, you can follow these steps and best practices:
  1. Familiarize yourself with the principles of responsible AI, such as transparency, accountability, and user empowerment, as outlined in the guide.
  2. Understand the potential impacts of AI on users and society, and be aware of the risks and challenges associated with the development and deployment of AI systems.
  3. Embrace an iterative model-development mindset, continuously striving for improvement and learning from feedback and errors.
  4. Build a responsible AI ecosystem by working with partners, experts, and communities to develop and share best practices, tools, and benchmarking resources.
  5. Follow the guidelines and best practices outlined in the guide for each stage of model development, from data collection and preparation to fine-tuning, evaluation, and deployment.
  6. Consider the specific use case and requirements for your AI system, and tailor your approach accordingly.
  7. Regularly monitor and evaluate the performance and safety of your AI system and make adjustments as needed to ensure it aligns with the principles outlined in the guide.
  8. Be transparent and accountable in your interactions with users and stakeholders, and be open to feedback and suggestions for improvement.
  9. Stay informed about the latest developments and research in the field of AI, and be prepared to adapt your approach as needed to address new challenges and opportunities.
  10. Follow the specific guidelines and best practices outlined in the addendum for Code Llama, such as defining content policies, evaluating and benchmarking models, and addressing input- and output-level risks.”

Reka Edge’s answer is quite thorough and detailed, too!

If you want to dive into coding, start by creating a Reka account and getting an API key. Then you can follow along with this notebook. Try out their Python SDK, and make sure to explore the different models available. It’s a great way to see how these tools work under the hood!

Gemini Models

Gemini 1.5, developed by Google DeepMind, is designed to handle complex tasks efficiently, thanks to the new Mixture-of-Experts (MoE) system. MoE is like packing several specialized models inside one big model. Instead of using the whole network for every task, Gemini 1.5 activates only the parts (or what they call “experts”) that are most relevant. This means Gemini 1.5 can tackle complex tasks with less processing power than a typical monolithic model.

We can explore Gemini’s features in Google AI Studio since it’s already baked right in. We’ll begin by uploading the following image illustrating VLM capabilities.

We’ll ask Gemini to analyze the image and extract details or concepts:

Explain this image. What are the specific details or concepts it represents?
“The image is a flowchart illustrating different types of image understanding tasks with a Vision Language Model.
  • Object Localization: This task aims to identify the precise location of objects within an image. The example question asks, “Is one cat behind another?”
  • Zero-shot Segmentation: This task involves segmenting an image into different regions based on a given label, even if the model has never seen that specific label before. The example question asks to segment “striped cat.”
  • Zero-shot Visual QA: This task requires answering questions about an image, even if the model has not been specifically trained to answer that type of question. The example question asks, “What is the breed of these cats?”
  • One-shot Learning with Instructions: This task involves providing the model with a single example and instruction to learn a new concept and apply it to other images. The example question asks, “Striped cats are called tabby cats. What is the breed of the cats in the image?””

Well, Gemini nailed it — it clearly understood the image and recognized the tasks a VLM handles. This is exactly what you’d expect from a model designed for advanced image analysis.

Now, we’ll upload another image — this time of an Italian dish — and ask Gemini if it can recognize the dishes and list the ingredients needed for them. This type of “food recognition task” will help us test its ability to identify foods and ingredients.

List the different dishes in this image, also from which country, and give me the ingredients I need to make them.
“The image shows an Italian feast with the following dishes:
  1. Seafood Stew: A rich and flavorful stew with seafood like clams, mussels, shrimp, fish, and vegetables in a tomato broth.
  2. Spaghetti with Tomato Sauce: Classic spaghetti pasta tossed in a simple yet flavorful tomato sauce.
  3. Meatballs in Tomato Sauce: Tender and juicy meatballs simmered in a rich tomato sauce.
  4. Lasagna: Layered pasta with meat sauce, cheese, and tomato sauce.
  5. Side Salad: A refreshing side salad with mixed greens, tomatoes, and other vegetables.
  6. Garlic Bread: Toasted bread slices rubbed with garlic and olive oil.
  7. Marinara Sauce: A simple tomato-based sauce.
This is a typical Italian meal with various dishes showcasing the flavors of Italian cuisine.

As for the ingredients, you’ll need tomatoes for many of these dishes, specifically for the sauces and the side salad.”

Well, the model did a good job recognizing the dishes, but it didn’t give me all the necessary ingredients. Let’s see how it does with videos instead, using the same video we used earlier with the Reka models. This time, the task will focus on action recognition.

What is the purpose of this video, and what key actions or processes are being demonstrated?
“The video is a tutorial about the importance of friendship and how it can impact our lives. It demonstrates the use of a video editing software called “Genmy” to create a video and add text subtitles. The video creator is using Genmy to:
  • Record their voice: This is seen at the top of the screen with the mic icon.
  • Add text to speech: You see the “Text to Speech” option, which is likely being used to generate subtitles or captions for the video.
  • Edit the timeline: They’re working with a video timeline, adding and arranging different elements.
  • Add voice-over: The speaker is adding commentary to the video.
  • Create a visually appealing presentation: They’re using a combination of video footage, text, and voice-over to create an engaging tutorial.”

Well, Gemini gave a clear breakdown of the video, identifying it as a tutorial demonstrating the Genmy video editing software to highlight how it’s used for recording voice, adding text-to-speech, editing video timelines, and allowing voice-over. Impressive!

Let’s upload one more video, the same one we used for Reka about the Parler TTS model, and ask Gemini to analyze and summarize the contents.

Summarize this video and explain how the contents might be relevant or helpful for software engineers.
“This video showcases different voice samples generated by a text-to-speech (TTS) system called “Parler-TTS.” The video highlights various voice characteristics like gender, pace, pitch, and even the simulated environment. For software engineers, this video demonstrates the progress made in TTS technology. These are some insights:
  • Voice Variety: The video shows that Parler-TTS can create diverse voices, potentially useful for applications needing different personas or tones.
  • Fine-grained Control: The video emphasizes the ability to control voice parameters like pace and pitch, showcasing the potential for customizability.
  • Quality and Naturalness: The video focuses on the quality of the generated speech, demonstrating the advancements made in speech synthesis, which is crucial for user experience.”

Nicely done! I can go with that answer. Gemini explains adjusting voice settings, like pitch and speed, and how having different voices can be useful. Gemini also emphasizes the importance of natural, high-quality speech, which is handy for developers working with TTS systems!

Alright, for coding, you can grab the code from Google AI Studio by clicking the Get Code button. You can choose to format the code in Python, Swift, or Java, among other languages.

Conclusion

Both Reka and Gemini are strong multimodal models for AI applications, but there are key differences between them to consider. Here’s a table that breaks those down:

Feature | Reka | Gemini 1.5
Multimodal Capabilities | Image, video, and text processing | Image, video, and text, with extended token context
Efficiency | Optimized for multimodal tasks | Built with MoE for efficiency
Context Window | Standard token window | Up to two million tokens (with Flash variant)
Architecture | Focused on multimodal task flow | MoE improves specialization
Training/Serving | High performance with efficient model switching | More efficient training with MoE architecture
Deployment | Supports on-device deployment | Primarily cloud-based, with Vertex AI integration
Use Cases | Interactive apps, edge deployment | Suited for large-scale, long-context applications
Languages Supported | Multiple languages | Supports many languages with long context windows

Reka stands out for on-device deployment, which is super useful for apps requiring offline capabilities or low-latency processing.

On the other hand, Gemini 1.5 Pro shines with its long context windows, making it a great option for handling large documents or complex queries in the cloud.

]]>
hello@smashingmagazine.com (Joas Pambou)
<![CDATA[Build A Static RSS Reader To Fight Your Inner FOMO]]> https://smashingmagazine.com/2024/10/build-static-rss-reader-fight-fomo/ https://smashingmagazine.com/2024/10/build-static-rss-reader-fight-fomo/ Mon, 07 Oct 2024 13:00:00 GMT In a fast-paced industry like tech, it can be hard to deal with the fear of missing out on important news. But, as many of us know, there’s an absolutely huge amount of information coming in daily, and finding the right time and balance to keep up can be difficult, if not stressful. A classic piece of technology like an RSS feed is a delightful way of taking back ownership of our own time. In this article, we will create a static Really Simple Syndication (RSS) reader that will bring you the latest curated news only once (yes: once) a day.

We’ll obviously work with RSS technology in the process, but we’re also going to combine it with some things that maybe you haven’t tried before, including Astro (the static site framework), TypeScript (for JavaScript goodies), a package called rss-parser (for connecting things together), as well as scheduled functions and build hooks provided by Netlify (although there are other services that do this).

I chose these technologies purely because I really, really enjoy them! There may be other solutions out there that are more performant, come with more features, or are simply more comfortable to you — and in those cases, I encourage you to swap in whatever you’d like. The most important thing is getting the end result!

The Plan

Here’s how this will go. Astro generates the website. I made the intentional decision to use a static site because I want the different RSS feeds to be fetched only once during build time, and that’s something we can control each time the site is “rebuilt” and redeployed with updates. That’s where Netlify’s scheduled functions come into play, as they let us trigger rebuilds automatically at specific times. There is no need to manually check for updates and deploy them! Cron jobs can just as readily do this if you prefer a server-side solution.

During the triggered rebuild, we’ll let the rss-parser package do exactly what it says it does: parse a list of RSS feeds that are contained in an array. The package also allows us to set a filter for the fetched results so that we only get ones from the past day, week, and so on. Personally, I only render the news from the last seven days to prevent content overload. We’ll get there!

But first...

What Is RSS?

RSS is a web feed technology that you can feed into a reader or news aggregator. Because RSS is standardized, you know what to expect when it comes to the feed’s format. That means we have a ton of fun possibilities when it comes to handling the data that the feed provides. Most news websites have their own RSS feed that you can subscribe to (this is Smashing Magazine’s RSS feed: https://www.smashingmagazine.com/feed/). An RSS feed is capable of updating every time a site publishes new content, which means it can be a quick source of the latest news, but we can tailor that frequency as well.

RSS feeds are written in an Extensible Markup Language (XML) format and have specific elements that can be used within it. Instead of focusing too much on the technicalities here, I’ll give you a link to the RSS specification. Don’t worry; that page should be scannable enough for you to find the most pertinent information you need, like the kinds of elements that are supported and what they represent. For this tutorial, we’re only using the following elements: <title>, <link>, <description>, <item>, and <pubDate>. We’ll also let our RSS parser package do some of the work for us.
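To make this concrete, here is a minimal, entirely hypothetical feed that uses only those elements:

<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <description>A made-up feed, for illustration only</description>
    <item>
      <title>Hello, RSS!</title>
      <link>https://example.com/hello-rss/</link>
      <pubDate>Mon, 07 Oct 2024 13:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>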

Creating The Static Site

We’ll start by creating our Astro site! In your terminal, run pnpm create astro@latest. You can use any package manager you want — I’m simply trying out pnpm for myself.

After running the command, Astro’s chat-based helper, Houston, walks through some setup questions to get things started.

 astro   Launch sequence initiated.

   dir   Where should we create your new project?
         ./rss-buddy

  tmpl   How would you like to start your new project?
         Include sample files

    ts   Do you plan to write TypeScript?
         Yes

   use   How strict should TypeScript be?
         Strict

  deps   Install dependencies?
         Yes

   git   Initialize a new git repository?
         Yes

I like to use Astro’s sample files so I can get started quickly, but we’re going to clean them up a bit in the process. Let’s clean up the src/pages/index.astro file by removing everything inside of the <main></main> tags. Then we’re good to go!

From there, we can spin things up by running pnpm start. Your terminal will tell you which localhost address you can find your site at.

Pulling Information From RSS feeds

The src/pages/index.astro file is where we will make an array of RSS feeds we want to follow. We will be using Astro’s template syntax, so between the two code fences (---), create an array of feedSources and add some feeds. If you need inspiration, you can copy this:

const feedSources = [
  'https://www.smashingmagazine.com/feed/',
  'https://developer.mozilla.org/en-US/blog/rss.xml',
  // etc.
]

Now we’ll install the rss-parser package in our project by running pnpm install rss-parser. This package is a small library that turns the XML that we get from fetching an RSS feed into JavaScript objects. This makes it easy for us to read our RSS feeds and manipulate the data any way we want.

Once the package is installed, open the src/pages/index.astro file, and at the top, we’ll import rss-parser and instantiate the Parser class.

import Parser from 'rss-parser';
const parser = new Parser();

We use this parser to read our RSS feeds and (surprise!) parse them to JavaScript. We’re going to be dealing with a list of promises here. Normally, I would probably use Promise.all(), but the thing is, this is supposed to be an uncomplicated experience. If one of the feeds doesn’t work for some reason, I’d prefer to simply ignore it.

Why? Well, because Promise.all() rejects the whole lot as soon as even one of its promises is rejected. That might mean that if one feed doesn’t behave the way I’d expect it to, my entire page would be blank when I grab my hot beverage to read the news in the morning. I do not want to start my day confronted by an error.

Instead, I’ll opt to use Promise.allSettled(). This method will actually let all promises complete even if one of them fails. In our case, this means any feed that errors will just be ignored, which is perfect.

Let’s add this to the src/pages/index.astro file:

interface FeedItem {
  feed?: string;
  title?: string;
  link?: string;
  date?: Date;
}

const feedItems: FeedItem[] = [];

await Promise.allSettled(
  feedSources.map(async (source) => {
    try {
      const feed = await parser.parseURL(source);
      feed.items.forEach((item) => {
        const date = item.pubDate ? new Date(item.pubDate) : undefined;

        feedItems.push({
          feed: feed.title,
          title: item.title,
          link: item.link,
          date,
        });
      });
    } catch (error) {
      console.error(`Error fetching feed from ${source}:`, error);
    }
  })
);

This populates the feedItems array. For each URL in the feedSources array we created earlier, rss-parser retrieves the items and, yes, parses them into JavaScript. Then, we push whatever data we want into the array! We’ll keep it simple for now and only store the following:

  • The feed title,
  • The title of the feed item,
  • The link to the item,
  • And the item’s published date.

The next step is to ensure that all items are sorted by date so we’ll truly get the “latest” news. Add this small piece of code to our work:

const sortedFeedItems = feedItems.sort((a, b) => (b.date ?? new Date()).getTime() - (a.date ?? new Date()).getTime());

Oh, and... remember when I said I didn’t want this RSS reader to render anything older than seven days? Let’s tackle that right now since we’re already in this code.

We’ll make a new variable called sevenDaysAgo and assign it a date. We’ll then set that date to seven days ago and use that logic before we add a new item to our feedItems array.

This is what the src/pages/index.astro file should now look like at this point:

---
import Layout from '../layouts/Layout.astro';
import Parser from 'rss-parser';
const parser = new Parser();

const sevenDaysAgo = new Date();
sevenDaysAgo.setDate(sevenDaysAgo.getDate() - 7);

const feedSources = [
  'https://www.smashingmagazine.com/feed/',
  'https://developer.mozilla.org/en-US/blog/rss.xml',
]

interface FeedItem {
  feed?: string;
  title?: string;
  link?: string;
  date?: Date;
}

const feedItems: FeedItem[] = [];

await Promise.allSettled(
  feedSources.map(async (source) => {
    try {
      const feed = await parser.parseURL(source);
      feed.items.forEach((item) => {
        const date = item.pubDate ? new Date(item.pubDate) : undefined;
        if (date && date >= sevenDaysAgo) {
          feedItems.push({
            feed: feed.title,
            title: item.title,
            link: item.link,
            date,
          });
        }
      });
    } catch (error) {
      console.error(`Error fetching feed from ${source}:`, error);
    }
  })
);

const sortedFeedItems = feedItems.sort((a, b) => (b.date ?? new Date()).getTime() - (a.date ?? new Date()).getTime());

---

<Layout title="Welcome to Astro.">
  <main>
  </main>
</Layout>

Rendering XML Data

It’s time to show our news articles on the Astro site! To keep this simple, we’ll format the items in an unordered list rather than some other fancy layout.

All we need to do is update the <Layout> element in the file, sprinkling in the parsed feed data for each item’s title, URL, and publish date.

<Layout title="Welcome to Astro.">
  <main>
  {sortedFeedItems.map(item => (
    <ul>
      <li>
        <a href={item.link}>{item.title}</a>
        <p>{item.feed}</p>
        <p>{item.date}</p>
      </li>
    </ul>
  ))}
  </main>
</Layout>

Go ahead and run pnpm start from the terminal. The page should display an unordered list of feed items. Of course, everything is unstyled at the moment, but luckily for you, you can make it look exactly like you want with CSS!

And remember that there are even more fields available in the XML for each item if you want to display more information. If you run the following snippet in your DevTools console, you’ll see all of the fields you have at your disposal:

feed.items.forEach((item) => {
  console.log(item);
});

Scheduling Daily Static Site Builds

We’re nearly done! The feeds are being fetched, and they are returning data back to us in JavaScript for use in our Astro page template. Since feeds are updated whenever new content is published, we need a way to fetch the latest items from them.

We want to avoid doing any of this manually. So, let’s set this site up on Netlify to gain access to their scheduled functions that trigger a rebuild and their build hooks that do the building. Again, other services do this, and you’re welcome to use another provider — I’m just partial to Netlify since I work there. In any case, you can follow Netlify’s documentation for setting up a new site.

Once your site is hosted and live, you are ready to schedule your rebuilds. A build hook gives you a URL to use to trigger the new build, looking something like this:

https://api.netlify.com/build_hooks/your-build-hook-id
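If you want to sanity-check the hook before automating anything, a plain POST request is enough to kick off a build (with your own hook ID swapped in, of course):

curl -X POST https://api.netlify.com/build_hooks/your-build-hook-id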

Let’s trigger builds every day at midnight. We’ll use Netlify’s scheduled functions. That’s really why I’m using Netlify to host this in the first place. Having them at the ready via the host greatly simplifies things since there’s no server work or complicated configurations to get this going. Set it and forget it!

We’ll install @netlify/functions (instructions) to the project and then create the following file in the project’s root directory: netlify/functions/deploy.ts.

This is what we want to add to that file:

// netlify/functions/deploy.ts

import type { Config } from '@netlify/functions';

const BUILD_HOOK =
  'https://api.netlify.com/build_hooks/your-build-hook-id'; // replace me!

export default async (req: Request) => {
  await fetch(BUILD_HOOK, {
    method: 'POST',
  })
};

export const config: Config = {
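  // Cron syntax: run once a day at midnight (UTC).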
  schedule: '0 0 * * *',
};

If you commit your code and push it, your site should re-deploy automatically. From that point on, it follows a schedule that rebuilds the site every day at midnight, ready for you to take your morning brew and catch up on everything that you think is important.

]]>
hello@smashingmagazine.com (Karin Hendrikse)
<![CDATA[How A Bottom-Up Design Approach Enhances Site Accessibility]]> https://smashingmagazine.com/2024/10/how-bottom-up-design-approach-enhances-site-accessibility/ https://smashingmagazine.com/2024/10/how-bottom-up-design-approach-enhances-site-accessibility/ Fri, 04 Oct 2024 09:00:00 GMT Accessibility is key in modern web design. A site that doesn’t consider how its user experience may differ for various audiences — especially those with disabilities — will fail to engage and serve everyone equally. One of the best ways to prevent this is to approach your site from a bottom-up perspective.

Understanding Bottom-Up Design

Conventional, top-down design approaches start with the big picture before breaking these goals and concepts into smaller details. Bottom-up philosophies, by contrast, consider the minute details first, eventually achieving the broader goal piece by piece.

This alternative way of thinking is important for accessibility in general because it reflects how neurodivergent people commonly think. While non-autistic people tend to think from a top-down perspective, those with autism often employ a bottom-up way of thinking.

Of course, there is considerable variation, and researchers have identified at least three specialist thinking types within the autism spectrum:

  • Visual thinkers who think in images;
  • Pattern thinkers who think of concepts in terms of patterns and relationships;
  • Verbal thinkers who think only in word detail.

Still, research shows that people with autism and ADHD show a bias toward bottom-up thinking rather than the top-down approach you often see in neurotypical users. Consequently, a top-down strategy means you may miss details your audience may notice, and your site may not feel easily usable for all users.

As a real-world example, consider the task of writing an essay. Many students are instructed to start an essay assignment by thinking about the main point they want to convey and then create an outline with points that support the main argument. This is top-down thinking — starting with the big picture of the topic and then gradually breaking down the topic into points and then later into words that articulate these points.

In contrast, someone who uses a bottom-up thinking approach might start an essay with no outline but rather just by freely jotting down every idea that comes to mind as it comes to mind — perhaps starting with one particular idea or example that the writer finds interesting and wants to explore further and branching off from there. Then, once all the ideas have been written out, the writer goes back to group related ideas together and arrange them into a logical outline. This writer starts with the small details of the essay and then works these details into the big picture of the final form.

In web design, in particular, a bottom-up approach means starting with the specifics of the user experience instead of the desired effect. You may determine a readable layout for a single blog post, then ask how that page relates to others and slowly build on these decisions until you have several well-organized website categories.

You may even get more granular. Say you start your site design by placing a menu at the bottom of a mobile site to make it easier to tap with one hand, improving ease of use. Then, you build a drop-down menu around that choice — placing the most popular or needed options at the bottom instead of the top for easy tapping. From there, you may have to rethink larger-scale layouts to work around those interactive elements being lower on the screen, slowly addressing larger categories until you have a finished site design.

In either case, the idea of bottom-up design is to begin with the most specific problems someone might have. You then address them in sequence instead of determining the big picture first.

Benefits Of A Bottom-Up Approach For Accessible Design

While neither bottom-up nor top-down approaches dominate the industry, some web engineers prefer the bottom-up approach for the various accessibility benefits it provides.

Putting User Needs First

The biggest benefit of bottom-up methods is that they prioritize the user’s needs.

Top-down approaches seem organized, but they often result in a site that reflects the designer’s choices and beliefs more than it serves your audience.

Consider some of the complaints that social media users have made over the years related to usability and accessibility for the everyday user. For example, many users complain that their Facebook feed will randomly refresh as they scroll for the sake of providing users with the most up-to-date content. However, the feature makes it virtually impossible to get back to a post a user viewed that they didn’t think to save. Likewise, TikTok’s watch history feature has come and gone over the years and still today is difficult for many users to find without viewing an outside tutorial on the subject.

This is a common problem: 95.9% of the largest one million homepages have Web Content Accessibility Guidelines (WCAG) errors. While a bottom-up alternative doesn’t mean you won’t make any mistakes, it may make them less likely, as bottom-up thinking often improves your awareness of new stimuli so you can catch things you’d otherwise overlook. It’s easier to meet users’ needs when you build your entire site around their experience instead of looking at UX as an afterthought.

Consider this example from Berkshire Hathaway, a multi-billion-dollar holding company. The overall design philosophy is understandable: It’s simple and direct, choosing to focus on information instead of fancy aesthetics that may not suit the company image. However, you could argue it loses itself in this big picture.

While it is simple, the lack of menus or color contrast and the small font make it harder to read and a little overwhelming. This confusion can counteract any accessibility benefits of its simplicity.

Alternatively, even a simple website redesign could include intuitive menus, additional contrast, and accessible font for easy navigation across the site.

The homepage for U.K. charity Scope offers a better example of web design centered around users’ needs. Concise, clear menus line the top of the page to aid quicker, easier navigation. The color scheme is simple enough to avoid confusion but provides enough contrast to make everything easy to read — something the sans-serif font further helps.

Ensuring Accessibility From The Start

A top-down method also makes catering to a diverse audience difficult because you may need to shoehorn features into an existing design.

For example, say, a local government agency creates a website focused on providing information and services to a general audience of residents. The site originally featured high-resolution images, bright colors, and interactive charts.

However, they realize the images are not accessible to people navigating the site with screen readers, while multiple layers of submenus are difficult for keyboard-only users. Further, the bright colors make it hard for visually impaired users to read the site’s information.

The agency, realizing these accessibility concerns, adds captions to each image. However, the captions disrupt the originally intended visual aesthetics and user flow. Further, adjusting the bright colors would involve completely rethinking the site’s entire color palette. If these considerations had been made before the site was built, the site build could have specifically accommodated these elements while still creating an aesthetically pleasing and user-friendly result.

Alternatively, a site initially built with high contrast, a calm color scheme, clear typography, simple menus, and reduced imagery would make this site much more accessible to a wide user base from the get-go.

As a real-world example, consider the Awwwards website. There are plenty of menus to condense information and ease navigation without overloading the screen — a solid accessibility choice. However, there does not seem to be consistent thought in these menus’ placement or organization.

There are far too many menus; some are at the top while others are at the bottom, and a scrolling top bar adds further distractions. It seems like Awwwards may have added additional menus as an afterthought to improve navigation, which leads to inconsistencies and crowding because the layout was not planned around them from the start.

In contrast, bottom-up alternatives address usability issues from the beginning, which results in a more functional, accessible website.

Redesigning a system to address a usability issue it didn’t originally make room for is challenging. It can lead to errors like broken links and other unintended consequences that may hinder access for other visitors. Some sites have even claimed to lose 90% of their traffic after a redesign. While bottom-up approaches won’t eliminate those possibilities, they make them less likely by centering everything around usage from the start.

The website for the Vasa Museum in Stockholm, Sweden, showcases a more cohesive approach to ensuring accessibility. Like Awwwards, it uses menus to aid navigation and organization, but there seems to be more forethought into these features. All menus are at the top, and there are fewer of them, resulting in less clutter and a faster way to find what you’re looking for. The overall design complements this by keeping things simple and neat so that the menus stand out.

Increasing Awareness

Similarly, bottom-up design ensures you don’t miss as many accessibility concerns. When you start at the top, before determining what details fit within it, you may not consider all the factors that influence it. Beginning with the specifics instead makes it easier to spot and address problems you would’ve missed otherwise.

This awareness is particularly important for serving a diverse population. An estimated 16% of the global population — 1.6 billion people — have a significant disability. That means there’s a huge range of varying needs to account for, but most people lack firsthand experience living with these conditions. Consequently, it’s easy to miss things impacting others’ UX. You can overcome that knowledge gap by asking how everyone can use your site first.

Bottom-Up vs. Top-Down: Which Is Best for You?

As these benefits show, a bottom-up design philosophy can be helpful when building an accessible site. Still, top-down methods can be advantageous at times, too. Which is best depends on your situation.

Top-down approaches are a good way to ensure a consistent brand image, as you start with the overall idea and base future decisions on this concept. It also makes it easier to create a design hierarchy to facilitate decision-making within your team. When anyone has a question, they can turn to whoever is above them or refer to the broader goals. Such organization can also mean faster design processes.

Bottom-up methods, by contrast, are better when accessibility for a diverse audience is your main concern. It may be harder to keep everyone on the same overall design philosophy page, but it usually produces a more functional website. You can catch and solve problems early and pay great attention to detail. However, this can mean longer design cycles, which can incur extra costs.

It may come down to what your team is most comfortable with. People think and work differently, with some preferring a top-down approach while others find bottom-up more natural. Combining the two — starting with a top-down model before tackling updates from a bottom-up perspective — can be beneficial, too.

Strategies For Implementing A Bottom-Up Design Model

Should you decide a bottom-up design method is best for your goals, here are some ways you can embrace this philosophy.

Talk To Your Existing User Base

One of the most important factors in bottom-up web design is to center everything around your users. As a result, your existing user base — whether from a separate part of your business or another site you run — is the perfect place to start.

Survey customers and web visitors about their experience on your sites and others. Ask what pain points they have and what features they’d appreciate. Any commonalities between responses deserve attention. You can also turn to WCAG standards for inspiration on accessible functionality, but first-hand user feedback should form the core of your mission.

Look To Past Projects For Accessibility Gaps

Past sites and business projects can also reveal what specifics you should start with. Look for any accessibility gaps by combing through old customer feedback and update histories and using these sites yourself to find issues. Take note of any barriers or usability concerns to address in your next website.

Remember to document everything you find as you go. Up to 90% of organizations’ data is unstructured, making it difficult to analyze later. Reversing that trend by organizing your findings and recording your accessible design process will streamline future accessibility optimization efforts.

Divide Tasks But Communicate Often

Keep in mind that a bottom-up strategy can be time-consuming. One of the reasons why top-down alternatives are popular is because they’re efficient. You can overcome this gap by splitting tasks between smaller teams. However, these groups must communicate frequently to ensure separate design considerations work as a cohesive whole.

A DevOps approach is helpful here. DevOps has helped 49% of its adopters achieve a faster time to market, and 61% report higher-quality deliverables. It also includes space for both detailed work and team-wide meetings to keep everyone on track. Such benefits ensure you can remain productive in a bottom-up strategy.

Accessible Websites Need A Bottom-Up Design Approach

You can’t overstate the importance of accessible website design. By the same token, bottom-up philosophies are crucial in modern site-building. A detail-oriented approach makes it easier to serve a more diverse audience along several fronts. Making the most of this opportunity will both extend your reach to new niches and make the web a more equitable place.

The Web Accessibility Initiative’s WCAG standards are a good place to start. While these guidelines don’t necessarily describe how to apply a bottom-up approach, they do outline critical user needs and accessibility concerns your design should consider. The site also offers a free and comprehensive Digital Accessibility Foundations course for designers and developers.

Familiarizing yourself with these standards and best practices will make it easier to understand your audience before you begin designing your site. You can then build a more accessible platform from the ground up.

Additionally, the following are some valuable related reads that can act as inspiration in accessibility-centered and user-centric design.

By employing bottom-up thinking as well as resources like these in your design approach, you can create websites that not only meet current accessibility standards but actively encourage site use among users of all backgrounds and abilities.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Eleanor Hecks)
<![CDATA[Interview With Björn Ottosson, Creator Of The Oklab Color Space]]> https://smashingmagazine.com/2024/10/interview-bjorn-ottosson-creator-oklab-color-space/ https://smashingmagazine.com/2024/10/interview-bjorn-ottosson-creator-oklab-color-space/ Wed, 02 Oct 2024 10:00:00 GMT Oklab is a new perceptual color space supported in all major browsers created by the Swedish engineer Björn Ottosson. In this interview, Philip Jägenstedt explores how and why Björn created Oklab and how it spread across the ecosystem.

Note: The original interview was conducted in Swedish and is available to watch.

About Björn

Philip Jägenstedt: Tell me a little about yourself, Björn.

Björn Ottosson: I worked for many years in the game industry on game engines and games like FIFA, Battlefield, and Need for Speed. I've always been interested in technology and its interaction with the arts. I’m an engineer, but I’ve always held both of these interests.

On Working With Color

Philip: For someone who hasn’t dug into colors much, what’s so hard about working with them?

Björn: Intuitively, colors can seem quite simple. A color can be lighter or darker, it can be more blue or more green, and so on. Everyone with typical color vision has a fairly similar experience of color, and this can be modeled.

However, the way we manipulate colors in software usually doesn’t align with human perception of colors. The most common color space is sRGB. There’s also HSL, which is common for choosing colors, but it’s also based on sRGB.

One problem with sRGB is that in a gradient between blue and white, it becomes a bit purple in the middle of the transition. That’s because sRGB really isn’t created to mimic how the eye sees colors; rather, it is based on how CRT monitors work. That means it works with certain frequencies of red, green, and blue, and also the non-linear coding called gamma. It’s a miracle it works as well as it does, but it’s not connected to color perception. When using those tools, you sometimes get surprising results, like purple in the gradient.

On Color Perception

Philip: How do humans perceive color?

Björn: When light enters the eye and hits the retina, it’s processed in many layers of neurons and creates a mental impression. It’s unlikely that the process would be simple and linear, and it’s not. But incredibly enough, most people still perceive colors similarly.

People have been trying to understand colors and have created color wheels and similar visualizations for hundreds of years. During the 20th century, a lot of research and modeling went into color vision. For example, the CIE XYZ model is based on how sensitive our photoreceptor cells are to different frequencies of light. CIE XYZ is still a foundational color space on which all other color spaces are based.

There were also attempts to create simple models matching human perception based on XYZ, but as it turned out, it’s not possible to model all color vision that way. Perception of color is incredibly complex and depends, among other things, on whether it is dark or light in the room and the background color it is against. When you look at a photograph, it also depends on what you think the color of the light source is. The dress is a typical example of color vision being very context-dependent. It is almost impossible to model this perfectly.

Models that try to take all of this complexity into account are called color appearance models. Although they have many applications, they’re not that useful if you don’t know whether the viewer is in a dark or bright room, or what the other viewing conditions are.

The odd thing is that there’s a gap between the tools we typically use — such as sRGB and HSL — and the findings of this much older research. To an extent, this makes sense because when HSL was developed in the 1970s, we didn’t have much computing power, so it’s a fairly simple translation of RGB. However, not much has changed since then.

We have a lot more processing power now, but we’ve settled for fairly simple tools for handling colors in software.

Display technology has also improved. Many displays now have different RGB primaries, i.e., a redder red, greener green, or bluer blue. sRGB cannot reach all colors available on these displays. The new P3 color space can, but it's very similar to sRGB, just a little wider.

On Creating Oklab

Philip: What, then, is Oklab, and how did you create it?

Björn: When working in the game industry, sometimes I wanted to do simple color manipulations like making a color darker or changing the hue. I researched existing color spaces and how good they are at these simple tasks and concluded that all of them are problematic in some way.

Many people know about CIE Lab. It’s quite close to human perception of color, but the handling of hue is not great. For example, a gradient between blue and white turns out purple in CIE Lab, similar to in sRGB. Some color spaces handle hue well but have other issues to consider.

When I left my job in gaming to pursue education and consulting, I had a bit of time to tackle this problem. Oklab is my attempt to find a better balance, something Lab-like but “okay”.

I based Oklab on two other color spaces, CIECAM16 and IPT. I used the lightness and saturation prediction from CIECAM16, which is a color appearance model, as a target. I actually wanted to use the datasets used to create CIECAM16, but I couldn’t find them.

IPT was designed to have better hue uniformity. In experiments, they asked people to match light and dark colors, saturated and unsaturated colors, which resulted in a dataset for which colors, subjectively, have the same hue. IPT has a few other issues but is the basis for hue in Oklab.

Using these three datasets, I set out to create a simple color space that would be “okay”. I used an approach quite similar to IPT but combined it with the lightness and saturation estimates from CIECAM16. The resulting Oklab still has good hue uniformity but also handles lightness and saturation well.

Philip: How about the name Oklab? Why is it just okay?

Björn: This is a bit tongue-in-cheek and some amount of humility.

For the tasks I had in mind, existing color spaces weren’t okay, and my goal was to make one that is. At the same time, it is possible to delve deeper. If a university had worked on this, they could have run studies with many participants. For a color space intended mainly for use on computer and phone screens, you could run studies in typical environments where they are used. It’s possible to go deeper.

Nevertheless, I took the datasets I could find and made the best of what I had. The objective was to make a very simple model that’s okay to use. And I think it is okay, and I couldn’t come up with anything better. I didn’t want to call it Björn Ottosson Lab or something like that, so I went with Oklab.

Philip: Does the name follow a tradition of calling things okay? I know there’s also a Quite OK Image format.

Björn: No, I didn’t follow any tradition here. Oklab was just the name I came up with.

On Oklab Adoption

Philip: I discovered Oklab when it suddenly appeared in all browsers. Things often move slowly on the web, but in this case, things moved very quickly. How did it happen?

Björn: I was surprised, too! I wrote a blog post and shared it on Twitter.

I have a lot of contacts in the gaming industry and some contacts in the Visual Effects (VFX) industry. I expected that people working with shaders or visual effects might try this out, and maybe it would be used in some games, perhaps as an effect for a smooth color transition.

But the blog post was spread much more widely than I thought. It was on Hacker News, and many people read it.

The code for Oklab is only 10 lines long, so many open-source libraries have adopted it. This all happened very quickly.

Chris Lilley from the W3C got in touch and asked me some questions about Oklab. We discussed it a bit, and I explained how it works and why I created it. He gave a presentation at a conference about it, and then he pushed for it to be added to CSS.

Photoshop also changed its gradients to use Oklab. All of this happened organically without me having to cheer it on.

On Okhsl

Philip: In another blog post, you introduced two other color spaces, Okhsv and Okhsl. You’ve already talked about HSL, so what is Okhsl?

Björn: When picking colors, HSL has a big advantage, which is that the parameter space is simple. Any value 0-360 for hue (H) together with any values 0-1 for saturation (S) and lightness (L) are valid combinations and result in different colors on screen. The geometry of HSL is a cylinder, and there’s no way to end up outside that cylinder accidentally.

By contrast, Oklab contains all physically possible colors, but there are combinations of values that don’t work where you reach colors that don’t exist. For example, starting from light and saturated yellow in Oklab and rotating the hue to blue, that blue color does not exist in sRGB; there are only darker and less saturated blues. That’s because sRGB in Oklab has a strange shape, so it’s easy to end up going outside it. This makes it difficult to select and manipulate colors with Oklab or Oklch.

Okhsl was an attempt at compromise. It maintains Oklab’s behavior for colors that are not very saturated, close to gray, and beyond that, stretches out to a cylinder that contains all of sRGB. Another way to put it is that the strange shape of sRGB in Oklab has been stretched into a cylinder with reasonably smooth transitions.

The result is similar to HSL, where all parameters can be changed independently without ending up outside sRGB. It also makes Okhsl more complicated than Oklab. There are unavoidable compromises to get something with the characteristics that HSL has.

Everything with color is about compromises. Color vision is so complex that it's about making practical compromises.

This is an area where I wish there were more research. If I have a white background and want to pick some nice colors to put on it, then you can make a lot of assumptions. Okhsl solves many things, but is it possible to do even better?

On Color Compromises

Philip: Some people who have tried Oklab say there are too many dark shades. You changed that in Okhsl with a new lightness estimate.

Björn: This is because Oklab is exposure invariant and doesn’t account for viewing conditions, such as the background color. On the web, there’s usually a white background, which makes it harder to see the difference between black and other dark colors. But if you look at the same gradient on a black background, the difference is more apparent.

CIE Lab handles this, and I tried to handle it in Okhsl, too. So, gradients in Okhsl look better on a white background, but there will be other issues on a black background. It’s always a compromise.

And, Finally…

Philip: Final question: What’s your favorite color?

Björn: I would have to say Burgundy. Burgundy, dark greens, and navy blues are favorites.

Philip: Thank you for your time, Björn. I hope our readers have learned something, and I’ll remind them of your excellent blog, where you go into more depth about Oklab and Okhsl.

Björn: Thank you!

]]>
hello@smashingmagazine.com (Philip Jägenstedt)
<![CDATA[Crows, Ghosts, And Autumn Bliss (October 2024 Wallpapers Edition)]]> https://smashingmagazine.com/2024/09/desktop-wallpaper-calendars-october-2024/ https://smashingmagazine.com/2024/09/desktop-wallpaper-calendars-october-2024/ Mon, 30 Sep 2024 09:00:00 GMT The leaves are shining in the most beautiful colors and pumpkins are taking over the front porches. It’s time to welcome the spookiest of all months: October! To get your desktop ready for fall and the upcoming Halloween season, artists and designers from across the globe once again got their ideas flowing and designed inspiring wallpapers for you to indulge in.

The wallpapers in this post come in versions with and without a calendar for October 2024 and can be downloaded for free. And since so many beautiful and unique designs evolve around our little wallpapers challenge every month (we’ve been running it for more than 13 years already, can you believe it?!), we also added some timeless October treasures from our wallpapers archives to the collection. Maybe you’ll spot one of your almost-forgotten favorites in here, too?

A huge thank you to everyone who shared their wallpapers with us this month — this post wouldn’t exist without you. Happy October!

  • You can click on every image to see a larger preview,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t anyhow influenced by us but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.

Happy Halloween

Designed by Ricardo Gimenes from Spain.

Reptile Awareness Day

“Let’s celebrate reptiles and raise awareness of their vital role in ecosystems. Many species face threats, so let’s learn, appreciate, and protect these incredible creatures and their habitats!” — Designed by PopArt Studio from Serbia.

Make Today A Good Day

“‘Make today a good day’ is a simple yet powerful reminder to take control of the present moment. It emphasizes that our attitude and actions shape our experience, encouraging positivity and purpose. Each day brings new opportunities, and by choosing to make it good, we invite growth, joy, and fulfilment into our lives.” — Designed by Hitesh Puri from Delhi, India.

The Dungeon Master

Designed by Ricardo Gimenes from Spain.

Happy Dussehra

“I was inspired by Dussehra’s rich symbolism and cultural significance while creating this design. The festival celebrates the triumph of good over evil. The bow and arrow become the central focus, while the bold red background, golden accents, and the temple’s silhouette add a sense of grandeur and spirituality.” — Designed by Cronix from the United States.

The Crow And The Ghosts

“If my heart were a season, it would be autumn.” — Designed by Lívia Lénárt from Hungary.

The Night Drive

Designed by Vlad Gerasimov from Georgia.

Autumn’s Splendor

“The transition to autumn brings forth a rich visual tapestry of warm colors and falling leaves, making it a natural choice for a wallpaper theme.” — Designed by Farhan Srambiyan from India.

National Fossil Day

“Join us in commemorating National Fossil Day, a day dedicated to honoring the wonders of Earth’s prehistoric past. On this special day, we invite you to step back in time and explore the remarkable world of fossils. These ancient remnants of life on our planet offer a glimpse into the evolution of life, from the tiniest microorganisms to the towering giants that once roamed the Earth.” — Designed by PopArt Studio from Serbia.

Magical October

“‘I’m so glad I live in a world where there are Octobers.’ (L. M. Montgomery, Anne of Green Gables)” — Designed by Lívi Lénárt from Hungary.

Bird Migration Portal

“October is a significant month for me because it is when my favorite type of bird travels south. For that reason I have chosen to write about the swallow. When I was young, I had a bird’s nest not so far from my room window. I watched the birds almost every day; because those swallows always left their nests in October. As a child, I dreamt that they all flew together to a nicer place, where they were not so cold.” — Designed by Eline Claeys from Belgium.

Ghostbusters

Designed by Ricardo Gimenes from Spain.

Spooky Town

Designed by Xenia Latii from Germany.

Hello Autumn

“Did you know that squirrels don’t just eat nuts? They really like to eat fruit, too. Since apples are the seasonal fruit of October, I decided to combine both things into a beautiful image.” — Designed by Erin Troch from Belgium.

Hanlu

“The term ‘Hanlu’ literally translates as ‘Cold Dew.’ The cold dew brings brisk mornings and evenings. Eventually the briskness will turn cold, as winter is coming soon. And chrysanthemum is the iconic flower of Cold Dew.” — Designed by Hong, ZI-Qing from Taiwan.

Discovering The Universe

“Autumn is the best moment for discovering the universe. I am looking for a new galaxy or maybe… a UFO!” — Designed by Verónica Valenzuela from Spain.

King Of The Pirates

Designed by Ricardo Gimenes from Spain.

Goddess Makosh

“At the end of the kolodar, as everything begins to ripen, the village sets out to harvesting. Together with the farmers goes Makosh, the Goddess of fields and crops, ensuring a prosperous harvest. What she gave her life and health all year round is now mature and rich, thus, as a sign of gratitude, the girls bring her bread and wine. The beautiful game of the goddess makes the hard harvest easier, while the song of the farmer permeates the field.” — Designed by PopArt Studio from Serbia.

Game Night And Hot Chocolate

“To me, October is all about cozy evenings with hot chocolate, freshly baked cookies, and a game night with friends or family.” — Designed by Lieselot Geirnaert from Belgium.

Strange October Journey

“October makes the leaves fall to cover the land with lovely auburn colors and brings out all types of weird with them.” — Designed by Mi Ni Studio from Serbia.

Autumn Deer

Designed by Amy Hamilton from Canada.

Dope Code

“October is the month when the weather in Poland starts to get colder, and it gets very rainy, too. You can’t always spend your free time outside, so it’s the perfect opportunity to get some hot coffee and work on your next cool web project!” — Designed by Robert Brodziak from Poland.

Transitions

“To me, October is a transitional month. We gradually slide from summer to autumn. That’s why I chose to use a lot of gradients. I also wanted to work with simple shapes, because I think of October as the ‘back to nature/back to basics month’.” — Designed by Jelle Denturck from Belgium.

Autumn In The Forest

“Autumn is a wonderful time to go for walks in the forest!” — Designed by Hilda Rytteke from Sweden.

Shades Of Gold

“We are about to experience the magical imagery of nature, with all the yellows, ochers, oranges, and reds coming our way this fall. With all the subtle sunrises and the burning sunsets before us, we feel so joyful that we are going to shout it out to the world from the top of the mountains.” — Designed by PopArt Studio from Serbia.

Happy Fall!

“Fall is my favorite season!” — Designed by Thuy Truong from the United States.

Ghostober

Designed by Ricardo Delgado from Mexico City.

First Scarf And The Beach

“When I was little, my parents always took me and my sister for a walk at the beach in Nieuwpoort. We didn't really do those beach walks in the summer but always when the sky started to turn gray and the days became colder. My sister and I always took out our warmest scarfs and played in the sand while my parents walked behind us. I really loved those Saturday or Sunday mornings where we were all together. I think October (when it’s not raining) is the perfect month to go to the beach for ‘uitwaaien’ (to blow out), to walk in the wind and take a break and clear your head, relieve the stress or forget one’s problems.” — Designed by Gwen Bogaert from Belgium.

Turtles In Space

“Finished September, with October comes the month of routines. This year we share it with turtles that explore space.” — Designed by Veronica Valenzuela from Spain.

Roger That Rogue Rover

“The story is a mash-up of retro science fiction and zombie infection. What would happen if a Mars rover came into contact with an unknown Martian material and got infected with a virus? What if it reversed its intended purpose of research and exploration? Instead choosing a life of chaos and evil. What if they all ran rogue on Mars? Would humans ever dare to voyage to the red planet?” Designed by Frank Candamil from the United States.

Summer, Don’t Go!

“It would be nice if we could bring summer back, wouldn’t it?” — Designed by Terezija Katona from Serbia.

Embracing Autumn’s Beauty

“We were inspired by the breathtaking beauty of autumn, with its colorful foliage and the symbolic pumpkin, which epitomizes the season. Incorporating typography allows us to blend aesthetics and functionality, making the calendar not only visually appealing but also useful.” — Designed by WPclerks from India.

A Positive Fall

“October is the month when fall truly begins, and many people feel tired and depressed in this season. The jumping fox wants you to be happy! Also, foxes always have reminded me of fall because of their beautiful fur colors.” — Designed by Elena Sanchez from Spain.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[How To Manage Dangerous Actions In User Interfaces]]> https://smashingmagazine.com/2024/09/how-manage-dangerous-actions-user-interfaces/ https://smashingmagazine.com/2024/09/how-manage-dangerous-actions-user-interfaces/ Fri, 27 Sep 2024 15:00:00 GMT By definition, an interface is a layer between the user and a system, serving the purpose of communication between them. Interacting with the interface usually requires users to perform certain actions.

Different actions can lead to various outcomes, some of which might be critical.

While we often need to provide additional protection in case users attempt to perform dangerous or irreversible actions, it’s good to remember that one of the ten usability heuristics, called “Error Prevention,” says:

“Good error messages are important, but the best designs carefully prevent problems from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.”

What Is A Dangerous Action?

Surprisingly, when we talk about dangerous actions, it doesn’t necessarily mean that something is being deleted.

Here’s an example of a dangerous action from the banking application I use:

The bank approved a loan for me, and as soon as I tap “Get Money,” it means that I have signed the necessary documents and accepted the loan. All it takes is a single tap on the yellow button, and I get the money.

As a result of an accidental tap, you might end up taking a loan when you didn’t intend to, which is why this action can be considered significant and dangerous.

Therefore, a dangerous action does not necessarily mean deleting something.

Some examples may include the following:

  • Sending an email,
  • Placing an order,
  • Publishing a post,
  • Making a bank transaction,
  • Signing a legal document,
  • Permanently blocking a user,
  • Granting or revoking permissions.

Ways To Confirm Dangerous Actions

There are many methods to prevent users from losing their data or taking irreversible actions unintentionally. One approach is to ask users to explicitly confirm their actions.

There are several ways to implement this, each with its own pros and cons.

Modal Dialogs

First of all, we should understand the difference between modal and non-modal dialogs. It’s better to think about modality state since dialogs, popups, alerts — all of these might be presented either in the modal state or not. I will use the term dialogs as a general reference, but the keyword here is modality.

“Modality is a design technique that presents content in a separate, dedicated mode that prevents interaction with the parent view and requires an explicit action to dismiss.”

Apple design guides

Modal dialogs require immediate user action. In other words, you cannot continue working with an application until you respond in some way.

Non-modal dialogs, on the other hand, allow you to keep using the application without interruption. A common example of a non-modal element is a toast message that appears in the corner of the screen and does not require you to do anything to continue using the app.

When used properly, modal dialogs are an effective way to prevent accidental clicks on dangerous actions.

The main problem with them is that if they are used to confirm routine actions (such as marking a task as done), they can cause irritation and create a habit of mindlessly confirming them on autopilot.

However, this is one of the most popular methods. Besides, it can be combined with other methods, so let’s dive into it deeper.

When To Use Them

Use modal dialogs when a user action will have serious consequences, especially if the result of the action is irreversible. Typical cases include deleting a post or project, confirming a transaction, and so on.

It depends on what kind of action users want to take, but the main thing to keep in mind is how serious the consequences are and whether the action is reversible or not.

Things To Keep In Mind

  1. Avoid vague language.
    If you ask users, “Are you sure?” chances are, they will not have any doubts.
  2. In the title, specify what exactly will happen or which entity will be affected (e.g., project name, user name, amount of money).
  3. Provide an icon that indicates that the action is dangerous.
    It both increases the chances that users will not automatically confirm it and is good for accessibility reasons (people with color blindness will notice the icon even if it appears grey to them, signaling its importance).
  4. In the description, be specific and highlight the necessary information.
  5. The CTA button should also contain a word that reflects the action.
    Instead of “Yes” or “Confirm,” use more descriptive options like “Delete,” “Pay $97,” “Make Transaction,” “Send Message,” and so on — including the entity name or amount of money in the button is also helpful. Compare: “Confirm” versus “Pay $97.” The latter is much more specific.

However, this might not be enough.

In some cases, you may require an extra action. A typical solution is to ask users to type something (e.g., a project name) to unblock the CTA button.
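Here is a rough sketch of that idea in TypeScript; the element names and the expected text are hypothetical, not taken from any of the products below:

// A rough sketch: keep the destructive button disabled until the typed
// value matches the expected confirmation text (e.g., the project name).
// All names here are hypothetical.
function wireUpConfirmation(
  input: HTMLInputElement,
  submitButton: HTMLButtonElement,
  expectedText: string
): void {
  submitButton.disabled = true;

  input.addEventListener('input', () => {
    // Trimming avoids failures caused by stray leading/trailing spaces.
    submitButton.disabled = input.value.trim() !== expectedText;
  });
}

// Usage: wireUpConfirmation(nameInput, deleteButton, 'my-project');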

Here are a few examples:

ConvertKit asks users to type “DO IT” when removing subscribers.

Pro tip: Note that they placed the buttons on the left side! This is a nice example of applying the law of proximity. It seems reasonable since the submit button is closer to the form (even if it consists of only one input).

Resend asks users to type “DELETE” if they want to delete an API key, which could have very serious consequences. The API key might be used in many of your apps, and you don’t want to break anything.

This modal is one of the best examples of following the best practices:

  • The title says what the action is (“Delete API Key”).
  • In the text, they mentioned the name of the API Key in bold and in a different color (“Onboarding”).
  • The red label that the action can not be undone makes it clearer that this is a serious action.
  • Extra action is required (typing “DELETE”).
  • The CTA button has both a color indicator (red usually is used for destructive actions) and a proper label — “Delete API Key”. Not a general word, e.g., “Confirm” or “Delete.”

Notice that Resend also places buttons on the left side, just as ConvertKit does.

Note: While generally disabling submit buttons is considered bad practice, this is one of the cases where it is acceptable. The dialog’s request is clear and straightforward both in ConvertKit and Resend examples.

Moreover, we can even skip the submit button altogether. This applies to cases where users are asked to input an OTP, PIN, or 2FA code. For example, the bank app I use does not even have a login button.

On the one hand, we still ask users to perform an extra action (input the code). On the other hand, it eliminates the need for an additional click.

Accessibility Concerns

There is ongoing debate about whether or not to include a submit button when entering a simple OTP. By “simple,” I mean one that consists of 4-6 digits.

While I am not an accessibility expert, I don’t see any major downsides to omitting the submit button in straightforward cases like this.

First, the OTP step is typically an intermediate part of the user flow, meaning a form with four inputs appears during some process. The first input is automatically focused, and users can navigate through them using the Tab key.

The key point is that, due to the small amount of information required (four digits), it is generally acceptable to auto-submit the form as soon as the digits are entered, even if a mistake is made.

On the one hand, if we care about accessibility, nothing stops us from providing users control over the inputs. On the other hand, auto-submission streamlines the process in most cases, and in the rare event of an error, the user can easily re-enter the digits.
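As a rough illustration of the idea (the markup and class name are assumptions, not a prescription), auto-submission boils down to listening to the inputs and firing once all of them are filled:

// Stand-in for whatever sends the completed code to your backend.
function submitCode(code: string): void {
  console.log('Submitting OTP:', code);
}

// Assumes four single-digit inputs with a hypothetical class "otp-digit".
const digits = Array.from(
  document.querySelectorAll<HTMLInputElement>('.otp-digit')
);

digits.forEach((digit, index) => {
  digit.addEventListener('input', () => {
    // Move focus forward as soon as a digit is typed.
    if (digit.value && index < digits.length - 1) {
      digits[index + 1].focus();
    }

    // Auto-submit once every field is filled, so no submit button is needed.
    if (digits.every((d) => d.value.length === 1)) {
      submitCode(digits.map((d) => d.value).join(''));
    }
  });
});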

Danger Zones

For the most critical actions, you may use the so-called “Danger zone” pattern.

A common way to implement this is to either have a dedicated page or place the set of actions at the bottom of the settings/account page.

It might contain one or more actions and is usually combined with other methods, e.g., a modal dialog. The more actions you have, the more likely you’ll need a dedicated page.

When To Use Them

Use a Danger Zone to group actions that are irreversible or have a high potential for data loss or significant outcomes for users.

These actions typically include things like account deletion, data wiping, or permission changes that could affect the user’s access or data.

Things To Keep In Mind

  1. Use colors like red, warning icons, or borders to visually differentiate the Danger Zone from the rest of the page.
  2. Each action in the Danger Zone should have a clear description of what will happen if the user proceeds so that users understand the potential consequences.
  3. Ask users for extra effort. Usually, the actions are irreversible and critical. In this case, you may ask users to repeat their password or use 2FA because if someone else gets access to the page, it will not be that easy to do the harmful action.
  4. Keep only truly critical actions there. Avoid making a danger zone for the sake of having one.

Inline Guards

Recently, I discovered that some apps have started using inline confirmation. This means that when you click on a dangerous action, it changes its label and asks you to click again.

This pattern is used by apps like Zapier and Typefully. While at first it seems convenient, it has sparked a lot of discussion and questions on X and LinkedIn.

I’ve seen attempts to fix accidental double-clicking by changing the position of the inline confirmation label that appears after the first click.

But this creates layout shifts. For users who work with the app daily, it may cause more irritation than help.

As an option, we can solve this issue by ignoring clicks for a tiny window (e.g., 100-200ms after the first click) to absorb accidental double-clicks.
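
Here is one way such an inline guard with a small double-click buffer could be sketched in TypeScript. The label text and the 200ms window are illustrative, not prescriptive.

```typescript
// First click arms the button; the second click confirms. Clicks that land
// within ~200ms of arming are ignored to absorb accidental double-clicks.
function inlineGuard(button: HTMLButtonElement, onConfirm: () => void) {
  let armedAt: number | null = null;

  button.addEventListener('click', () => {
    if (armedAt === null) {
      armedAt = Date.now();
      button.textContent = 'Click again to delete'; // inline confirmation label
      return;
    }
    if (Date.now() - armedAt < 200) return; // swallow the double-click
    onConfirm();
  });
}
```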

It also matters who your users are. Remember the good old days when we used to click a dozen times to launch Internet Explorer and ended up with dozens of open instances?

If your target audience is likely to do this, the pattern clearly will not work.

However, for apps like Zapier or Typefully, my assumption is that the target audience might benefit from the pattern.

Two-Factor Authorization Confirmation

This method involves sending a confirmation request, with or without some kind of verification code, to another place, such as:

  • SMS,
  • Email,
  • Authenticator app on mobile,
  • Push notifications (e.g., instead of sending SMS, you may choose to send push notifications),
  • Messengers.

Note: I’m not talking about authentication (i.e., the login process), but rather about confirming an action.

An example that I personally face a lot is an app for sending cryptocurrency. Since this is a sensitive request, apart from submitting it on the website, I must also approve it via email.

When To Use It

It can be used for operations such as money transfers, ownership transfers, and account deletion (even if you have a danger zone). Most of us use this method quite often when we pay online and our banks send us an OTP (one-time password or one-time code).

It may come after an initial protection method, e.g., a confirmation dialog.

As you can see, the methods are often combined and used together. We should not consider each of them in isolation but rather in the context of the whole business process.
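
As a rough illustration of the flow, here is a server-side sketch in TypeScript. The saveToken and sendEmail helpers are hypothetical stand-ins for your own infrastructure; the point is the token-then-approve sequence, not the specific API.

```typescript
import { randomBytes } from 'node:crypto';

// Hypothetical persistence and mail helpers standing in for real infrastructure.
declare function saveToken(t: {
  userId: string; transferId: string; token: string; expiresAt: number;
}): Promise<void>;
declare function sendEmail(
  userId: string,
  mail: { subject: string; link: string },
): Promise<void>;

async function requestTransferConfirmation(userId: string, transferId: string) {
  const token = randomBytes(32).toString('hex'); // single-use secret
  const expiresAt = Date.now() + 15 * 60 * 1000; // valid for 15 minutes

  await saveToken({ userId, transferId, token, expiresAt });

  // The transfer stays pending until the user opens the link in their inbox.
  await sendEmail(userId, {
    subject: 'Confirm your transfer',
    link: `https://example.com/transfers/${transferId}/confirm?token=${token}`,
  });
}
```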

Passkeys

Passkeys are a modern, password-less authentication method designed to enhance both security and user experience.

“Passkeys are a replacement for passwords. A password is something that can be remembered and typed, and a passkey is a secret stored on one’s devices, unlocked with biometrics.”

passkeys.dev

There are a few pros of using passkeys over 2FA, both in terms of security and UX:

  1. Unlike 2FA, which typically requires entering a code from another device or app (e.g., SMS or authenticator apps), passkeys streamline the confirmation process. They don’t require switching between devices or waiting for a code to arrive, providing immediate authentication.
  2. While 2FA provides extra protection, it is vulnerable to phishing, SIM-swapping, or interception. Passkeys are much more resistant to such attacks because they use public-private key cryptography. This means no secret code is ever sent over the network, making it phishing-resistant and not reliant on SMS or email, which can be compromised.
  3. Passkeys require less mental effort from users. There’s no need to remember a password or type a code — just authenticate with a fingerprint, facial recognition, or device-specific PIN. This way, we reduce cognitive load.
  4. With passkeys, the authentication process is almost instant. Unlike 2FA, where users might have to wait for a code or switch to another device, passkeys give us the opportunity to confirm actions without switching context, e.g., opening your email inbox or copying OTP from a mobile device.

Passkeys are widely supported, and more and more companies are adopting them.
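
For context, confirming a sensitive action with a passkey boils down to a single WebAuthn call. Below is a hedged sketch; in a real app, the challenge must be issued and later verified by your server.

```typescript
// Ask the user to confirm a sensitive action with their passkey.
// `challenge` must be a server-issued, single-use value.
async function confirmWithPasskey(challenge: Uint8Array): Promise<boolean> {
  try {
    const credential = await navigator.credentials.get({
      publicKey: {
        challenge,
        userVerification: 'required', // biometrics or device PIN
        timeout: 60_000,
      },
    });
    // In practice, send `credential` to the server for signature verification.
    return credential !== null;
  } catch {
    return false; // user cancelled or no passkey is available
  }
}
```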

Second-Person Confirmation

This is a mechanism in which two users are involved in the process. We may call them the initiator and the approver.

In this case, the initiator makes a request to take some action while the approver decides whether to confirm it or not.

In both roles, a confirmation dialog or other UI patterns may be used. However, the main idea is to separate responsibilities and decrease the probability of a bad decision.

Actually, you have likely encountered this method many times before. For example, a developer submits a pull request, and a code reviewer decides whether to approve or decline it.

When To Use It

It is best suited for situations where the seriousness of the decision requires more than one person to be involved.

There is a direct analogy from real life. Take a look at the picture below:

The Council of Physicians reminds us that in medicine, seeking a second opinion is crucial, as collaboration and diverse perspectives often result in more informed decisions and better patient care. This is a perfect example of when a second opinion or an approver is essential.

Here are some apps that use this method:

  • GitHub, as previously mentioned, for merging pull requests.
  • Jira and other similar apps. For example, when you move issues through a certain workflow stage, it may require manager approval.
  • Banking applications. When you make a high-value transaction, it may need to be verified for legal reasons.
  • Deel, a global hiring and payroll platform. One party (e.g., an employer) draws up a contract and sends it to the other party (e.g., a freelancer), who accepts it.

But here is the thing: this is less a separate method than an approach to implementing business logic. Even if another person confirms an action, it is still a dangerous action; the only difference is that now it’s another person who must approve it.

So, from the UI point of view, the examples above are not exactly a standalone way to protect users from making wrong decisions. Rather, they represent an approach that helps us reduce the number of critical mistakes.

Do We Actually Need To Ask Users?

When you ask users to take an action, you should be aware of its original purpose.

The fact that users perform actions does not mean that they perform them consciously.

Psychology describes many behavioral phenomena. To name a few:

  • Cognitive inertia: The tendency of a person to stick to familiar decisions, even if they are not suitable for the current situation. For instance, the vast majority of people don’t read user agreements. They simply agree to the lengthy text because doing so is required from the legal point of view.
  • Availability heuristic: People often make decisions based on information that is easily accessible or familiar to them rather than making a mental effort. When users see the same confirmation popups, they might automatically accept them based on their previous successful experience. Of course, sooner or later, this might not work, and accepting the required action can lead to bad consequences.
  • Cognitive miser: The human mind tends to think and solve problems in simpler, less effortful ways rather than in more sophisticated, effortful ways, regardless of intelligence. This explains why many users just click “yes” or “agree” without carefully reading the text.
  • Banner blindness is quite a representative example: not related to confirmation as such, but it revolves around the same idiosyncrasies of human behavior.

A reasonable question that may arise: What are the alternatives?

Even though we cannot fully control users’ behavior, there are a few tactics we can use.

Delaying

In some scenarios, we can artificially delay the task execution in a graceful way.

One of my favorite examples is Glovo, a food delivery app. Let’s have a look at the three screens you see when you order something.

The first screen is a cart with the items you chose to buy (and an annoying subscription promotion that takes up a third of the screen).

After you tap the “confirm order” button, you’ll see the second screen, which asks whether everything is correct. The information appears gradually with a fade-in animation, and there is a progress bar, which is a fake one.

After a few seconds, you’ll see another screen showing that the app is trying to charge your card; this time, it’s a real process. Once the transaction completes, you’ll see the status of the order and the approximate delivery time.

Pro tip: When you show the status of the order and visually highlight or animate the first step, it makes users more confident that the order will be completed, thanks to the trick called the Goal-Gradient Effect.

You’ve just paid, and “something starts happening” (at least visually), which is a sign that “Oh, they should have already started preparing my order. That’s nice!”

The purpose of the screen with a fake progress bar is to let users verify the order details and confirm them.

But this is done in a very elegant way:

  1. On the first screen, you click “confirm order”. It doesn’t invoke any modals or popups, such as “Are you sure?”.
  2. On the second screen, users see information about their order appear right away, and the progress bar at the bottom advances. It seems like the app is doing something, but it’s an illusion, one that makes you take another quick look at what you’ve just ordered.

In the previous version of the app, you couldn’t even skip the process; you could only cancel it. Now they’ve added a “Continue” button, which is essentially a “Yes, I’m sure” confirmation.

This brings us back to the drawbacks of classic confirmation modals, since users can skip the process. But the approach is different: it combines a feedback loop from the app with the ability to skip the process.

This combination makes users pay attention to the address, order, and price at least some of the time, and it gives them time to cancel the order, whereas the classic “yes or no?” confirmation is more likely to be accepted right away.
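
A graceful delay of this kind is easy to prototype: execute the task after a short review window unless the user cancels, and let them skip the wait. A minimal sketch in TypeScript, with an illustrative five-second window:

```typescript
// Place the order only after a review window, during which the user can
// cancel or continue early. The 5000ms duration is illustrative.
function scheduleOrder(placeOrder: () => void, reviewMs = 5000) {
  const timer = setTimeout(placeOrder, reviewMs);
  return {
    cancel: () => clearTimeout(timer), // wired to the "Cancel" button
    continueNow: () => {
      clearTimeout(timer);
      placeOrder(); // wired to the "Continue" button
    },
  };
}
```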

The Undo Option

The undo pattern allows users to reverse an action they have just performed, providing a safety net that reduces anxiety around making mistakes.

Unlike confirmation modals that interrupt the workflow to ask for user confirmation, the undo pattern provides a smoother experience by allowing actions to be completed with the option to reverse them if needed.

When To Use It

It works perfectly fine for non-destructive, reversible actions — actions that don’t have significant and immediate consequences:

  • Reversing actions when editing a document (The beloved ctrl + z shortcut);
  • Removing a file (if it goes to the trash bin first);
  • Changing the status of a task (e.g., if you accidentally marked a task completed);
  • Deleting a message in a chat;
  • Applying filters to a photo.

Combined with a timer, you can extend the range of applicable cases: tasks such as sending an email or making a money transfer can be undone as long as the timer hasn’t run out.

When You Cannot Use It

It’s not suitable for actions that have serious consequences, such as the following:

  • Deleting an account;
  • Submitting legal documents;
  • Purchasing goods (refund is not the same as the undo option);
  • Making requests for third-party APIs (in most cases).

How To Implement Them?

  1. The most common approach, which most people use every day, is the ctrl + z shortcut. However, it’s constrained to certain contexts, such as text editors, moving files between folders, and so on.
  2. Toasts are probably the most common way to implement undo in web and mobile apps (a sketch follows after this list). The only thing to keep in mind is that the toast should stand out enough to be noticed. Hiding it in a corner with a tiny message and an inconspicuous color might not work — especially on wide screens.
  3. A straightforward solution is simply to provide an undo button, preferably close to the button that triggers the action you want to undo.
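
Here is a minimal sketch of the toast-based variant mentioned in the list above. The showToast helper is hypothetical; the key idea is that the destructive operation is committed only after the undo window closes.

```typescript
// Hypothetical toast helper: renders a message with an action button and
// returns a handle for dismissing the toast.
declare function showToast(
  message: string,
  opts: { actionLabel: string; onAction: () => void },
): { dismiss: () => void };

function deleteWithUndo(itemId: string, commitDelete: (id: string) => void) {
  let undone = false;

  const toast = showToast('Message deleted', {
    actionLabel: 'Undo',
    onAction: () => { undone = true; }, // nothing to restore: we never deleted
  });

  // Commit the deletion only after the undo window closes.
  setTimeout(() => {
    toast.dismiss();
    if (!undone) commitDelete(itemId);
  }, 5000);
}
```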

The undo option is tightly related to the concept called soft deleting, which is widely used in backend frameworks such as Laravel.

The concept means that when users delete something via the UI, it looks deleted, but in the database, we keep the data and merely mark it as deleted. Nothing is actually lost, which is why the undo option is possible: we don’t truly delete anything.

This is a good technique to ensure that data is never lost. However, not every table needs this.

For example, if a user deletes an account and you don’t want them to be able to restore it (perhaps due to legal regulations), then you should erase the data completely. But in many cases, soft deleting is worth considering. In the worst case, you’ll be able to manually restore user data if it cannot be done via the UI for some reason.
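
A minimal, framework-agnostic sketch of soft deleting might look like this. The deletedAt field name follows a common convention (Laravel uses deleted_at), but the exact shape is up to you.

```typescript
// Instead of removing the record, stamp it with a deletion time.
interface SoftDeletable {
  deletedAt: Date | null;
}

function softDelete<T extends SoftDeletable>(record: T): T {
  return { ...record, deletedAt: new Date() };
}

function restore<T extends SoftDeletable>(record: T): T {
  return { ...record, deletedAt: null }; // the undo option in data terms
}

// Read queries then exclude soft-deleted rows by default, e.g.:
// SELECT * FROM messages WHERE deleted_at IS NULL;
```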

Conclusion

There’s something I want everyone to keep in mind, regardless of who you are or what you do.

Every situation is unique. A certain approach might work or fail for a variety of reasons. You might sometimes wonder why a specific decision was made, but you may not realize how many times the interface was revised based on real user feedback.

User behavior is affected by many factors, including country, age, culture, education, familiarity with certain patterns, disabilities, and more.

What’s crucial is to stay in control of your data and users and be prepared to respond when something goes wrong. Following best practices is important, but you must still verify if they work in your specific case.

Just like in chess, there are many rules — and even more exceptions.

Further Reading

]]>
hello@smashingmagazine.com (Victor Ponamariov)
<![CDATA[The Timeless Power Of Spreadsheets]]> https://smashingmagazine.com/2024/09/timeless-power-of-spreadsheets/ https://smashingmagazine.com/2024/09/timeless-power-of-spreadsheets/ Mon, 23 Sep 2024 09:00:00 GMT Part of me can’t believe I’m writing this article. Applying the insights of Leonardo da Vinci or Saul Bass to web design is more my groove, but sometimes you simply have to write about spreadsheets. You have to advocate for them. Because someone should.

In a checkered career spanning copywriting, journalism, engineering, and teaching, I’ve seen time and time again how powerful and useful spreadsheets are in all walks of life. The cold, hard truth is that you — yes, you — likely have an enormous amount to gain by understanding how spreadsheets work. And, more importantly, how they can work for you.

That’s what this piece is about. It’s a rallying cry, with examples of spreadsheets’ myriad uses and how they can actually, in the right circumstances, be the bedrock of altogether inspiring, lovely things.

Cellular Organisms

Spreadsheets have been around for thousands of years. Papyrus remnants have been discovered from as far back as 4,600 BC. Their going digital in the late ‘70s was a major factor in the rise of personal computing. Much is (rightly) made of the cultural transformation brought about by the printing press. The digital spreadsheet, not so much.

For as long as people have had projects and data to organize, spreadsheets have been indispensable. They were the original databases.

Spreadsheets don’t always get a lot of attention these days. For organization and workflow, we usually find ourselves in the worlds of Trello, Jira, or GitHub Projects. Datasets live in Oracle, MongoDB, and the like. There are good reasons for these services emerging — everything has its place — but I do get the sense that specialized tooling causes us to skip over the flexibility and power that today’s spreadsheet editors provide.

This is especially true for smaller projects and ones in their early stages. Yes, sometimes only a huge database will do, but often spreadsheets are more than fit for purpose.

Benefits

What makes spreadsheets so great? We’ll get into a few real-world examples in a second, but several qualities hold true. They include the following:

  • Collaboration
    Cloud-based editors like Google Sheets give groups of people a space in which to collaborate on data. They can serve as a middle ground for people working on different parts of the same project.
  • Structure
    It’s inherent to spreadsheets that they’ll get you thinking about the ‘shape’ of the information you’re dealing with. In the same way that a blank piece of paper invites fluidity of thought, tables coax out frameworks — and both have their place.
  • Flexibility
    Spreadsheets can evolve in real time, which is especially useful during the formative stages of a project when the shape of the ‘data’ is still being established. Adding a field is as simple as naming a column, and the ability to weave in formulas makes it easy to infer other values from the ones you have. With tools like the Google Sheets API, you can even scrape data directly from the spreadsheet.
  • Power
    You’d be surprised how much you can do in spreadsheets. Sometimes, you don’t even need bespoke dashboards; you can do it all in the editor. From data visualization to pivot tables, spreadsheet editors come with a bunch of powerful out-of-the-box features.
  • They translate into other data formats
    Spreadsheets are one small jump from the mighty CSV. When the time is right, spreadsheets can still become raw data if you want them to.

Such is the flexibility and power of spreadsheets, and what’s listed here only scratches the surface. Their fundamental strength of organizing data has made them useful for thousands of years, while contemporary enhancements have taken them to the next level.

Case Studies

Below are a few examples from my own experiences that showcase these benefits in the real world. They’re obviously slanted towards my interests, but hopefully, they illustrate the usefulness of spreadsheets in different contexts.

Galaxies (Of The Guardian)

I work as a software engineer at Guardian News & Media, where “10% time” (one work day every two weeks to spend on independent learning, side projects, and so on) is part of the working culture. An ongoing project of mine has been Galaxies (of the Guardian), a D3-powered org chart that represents departments as a series of interrelated people, teams, and streams.

What you see above is powered by information stored and edited in spreadsheets. A lambda scrapes departmental information using the aforementioned Google Sheets API, then reformats it into a shape Galaxies plays nicely with.
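
For anyone curious what that looks like in practice, here is a hedged sketch using the googleapis Node client. The spreadsheet ID, range, and row shape are placeholders, and auth setup is omitted.

```typescript
import { google } from 'googleapis';

// Pull rows out of a spreadsheet with the Google Sheets API.
// 'YOUR_SPREADSHEET_ID' and the range are placeholders.
async function fetchDepartments(auth: any) {
  const sheets = google.sheets({ version: 'v4', auth });
  const res = await sheets.spreadsheets.values.get({
    spreadsheetId: 'YOUR_SPREADSHEET_ID',
    range: 'Departments!A2:C', // skip the header row
  });
  // Each row arrives as an array of cell values; reshape as your app needs.
  return res.data.values ?? [];
}
```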

This approach has had several benefits. The earliest iterations of Galaxies were only possible because there was already a spreadsheet being maintained by those who needed to keep track of who worked where. Techies and non-techies alike are able to update information easily, and it is transparent to anyone who works inside the organization.

For anyone interested, I wrote a piece about how Galaxies works on the Guardian engineering blog. Suffice it to say here, spreadsheets were — and remain — the engine of the whole thing.

Food Bank Britain

My background is in journalism, and I still freelance in my own time. As my coding skills have improved, I’ve naturally gravitated towards data journalism, even teaching it for a year at my old journalism school.

Spreadsheets are inseparable from a lot of modern journalism — and, indeed, copywriting in general. The digital world is awash with data, and good luck making sense of it without a working knowledge of spreadsheets.

For example, a piece I wrote for the Byline Times about food banks earlier this year simply wouldn’t have been possible without spreadsheets. It was by collating data from the Trussell Trust, the Independent Food Aid Network, and national census reports that I was able to map out the sheer scale of the UK’s food bank network.

Granted, the map is more visually engaging. But then that’s the idea. It’s the same information, just presented more pointedly.

There are plenty of other instances of spreadsheets being instrumental at the Guardian alone. Typerighter, the newspaper’s automated house style checker, began life as a subeditor’s spreadsheet. User research and bug tracking for the new Feast cooking app, which I worked on during its formative stages, was tracked and discussed in spreadsheets.

And, of course, countless pieces of quality journalism at the Guardian and beyond continue to be powered by them.

Another Cell In The Table

If this piece has got you to at least consider learning more about spreadsheets and spreadsheet editors, you’re in luck. There are countless free learning resources available on the web. Here are a few excellent beginner videos to help you on your way:

As for spreadsheet editors, the big three these days are probably Google Sheets, Microsoft Excel, and LibreOffice Calc (for the open source devotees out there). They all work much the same way. And as you get comfortable with their functionality, new avenues will open.

Data is the lifeblood of the modern web, and spreadsheets remain one of the most accessible, flexible ways to organize, analyze, and share it. As I hope the examples I’ve shared with you show, spreadsheets aren’t inherently boring. They can be, but when used in the right ways, they become the engines of dynamic, impactful work.

The way they go is up to you.

]]>
hello@smashingmagazine.com (Frederick O’Brien)
<![CDATA[Embracing Introversion In UX]]> https://smashingmagazine.com/2024/09/embracing-introversion-in-ux/ https://smashingmagazine.com/2024/09/embracing-introversion-in-ux/ Thu, 19 Sep 2024 15:00:00 GMT I place myself firmly in the category of being an introvert when it comes to my role as a UX researcher. I love the process of planning and executing research. I have never felt a need to be the loudest or most talkative person in a meeting. I contribute after I have developed something worth saying (or have a really bad joke worked up).

I also love interviews and usability testing, where I interact with users and engage in meaningful conversation. And then I am exhausted. I love speaking about the findings of research and sharing the spotlight with my colleagues during a presentation, and then I want to go to bed underneath the conference room table. I facilitate workshops with ease but have trouble mustering up the energy required to attend what often feels like mandatory post-workshop socializing.

In truth, I have sometimes felt introverted tendencies set me back at work, particularly as a consultant who needs to build relationships to keep the work flowing (in theory). An example would be getting called out by a manager in my junior days for not engaging in as many networking activities as I could have with some of our clients. My defense of feeling overstimulated, overwhelmed, and uninterested in socializing fell on deaf ears.

I think we have grown in our understanding of introverts and what they need to be high performers, particularly since Susan Cain’s 2012 best-selling book Quiet: The Power of Introverts in a World That Can’t Stop Talking was released.

This article aims to celebrate the power of introversion in UX research and design. We’ll debunk common misconceptions, explore the unique strengths introverted researchers and designers bring to the table, and offer practical tips for thriving in a field that sometimes seems tailored for extroverts. My goal is to build on some of the work on UX and introversion that already exists. I’ve cited other articles where appropriate and shared the resources I’ve found on UX and introversion at the end of this article.

Introversion is not the same thing as being shy, just as extroversion isn’t the same thing as being brash. For simplicity and the sake of this article, I am going to use the following definitions provided by de Jongh & de la Croix:

“Extroverts get energy from interaction with others and like to share ideas with others to help develop their thinking, whereas introverts need to recharge on their own after much social contact and prefer to share ideas only when they are fully formed.”

There are many potential reasons one could have introvert or extrovert tendencies (McCulloch 2020), and these fall on a scale: one might lean introvert or extrovert depending on the occasion. Those who straddle the middle ground of introversion and extroversion are considered ambiverts.

As Jonathan Walter notes in a series of articles on introverts and UX, many UX professionals find themselves drawn to the field because of their introverted nature. Introversion, often misunderstood as shyness or social awkwardness, is simply a preference for internal reflection and processing. It’s about drawing energy from solitude and finding fulfillment in deep thought and meaningful connections.

As UX is clearly a space introverts are drawn to, there is already a decent amount of literature aimed at introverted UX practitioners. In writing this article, I wanted to differentiate it from what is already out there, as well as extend it.

I wanted to include some personal stories of introverts who aren’t myself and work in UX. To do this, I went to LinkedIn and asked people to send me personal anecdotes. My post, at least by my standards, was well received, with over 100 reactions and a dozen people sending me direct messages sharing anecdotes. I was even introduced to Tim Yeo, who has recently released a book on introverts in the workplace. I’ll be sharing some of the stories people shared with me over LinkedIn, where appropriate (and with their permission), throughout this article to help draw the connections to real life.

First, let’s talk a little about what we know about measuring if you (or others) are introverted, extroverted, or in between.

Measuring Introversion & Extroversion: Self-Assessment Tools

Understanding where you and your team members fall on the introversion-extroversion spectrum can be invaluable for tailoring your approach to work, collaboration, and personal development. Reinoud de Jongh and Anne de la Croix, two medical school professors, write that medical educators should know where they fall on the introversion-extroversion spectrum to deliver great teaching experiences. I’d extend this to UX practitioners, including UX managers, UX researchers, and designers. If we collaborate with others, we will benefit from knowing where we fall on this scale.

While there’s no single definitive test, here are a few simple and accessible tools that can offer insights:

  1. Online Quizzes: Numerous online quizzes and assessments are available, often based on established personality frameworks like the Myers-Briggs Type Indicator (MBTI) or the Big Five personality traits. These quizzes can provide a general sense of your tendencies and preferences. Popular options include:
    • 16Personalities: Offers a free, comprehensive assessment based on the MBTI.
    • Truity: Provides a variety of personality tests, including the Big Five and Enneagram.
    • Verywell Mind: Offers a quiz specifically focused on introversion and extroversion.
  2. Reflection and Journaling: Take some time to reflect on your daily experiences and interactions. Ask yourself the following questions:
    • What activities energize me vs. drain me?
    • Do I prefer to work alone or in groups?
    • How do I recharge after a long day?
    • Do I prefer deep conversations with a few people or socializing with a larger group?
  3. Observation: Pay attention to your behavior and reactions in different social settings. Notice what triggers your stress response and what environments make you feel most comfortable and productive.
  4. Professional Assessment: If you’re seeking a more in-depth analysis, consider consulting a career coach or psychologist who specializes in personality assessment. They can administer standardized tests and provide personalized feedback and guidance.
    • Multidimensional Introversion-Extroversion Scales (MIES): This scale specifically focuses on the multifaceted nature of introversion and extroversion. It measures several sub-traits associated with each dimension, such as social engagement, assertiveness, enjoyment of social interaction, and preference for solitude. Professional psychologists often reference this test, which can be accessed freely here, but might be best done with the guidance of a professional.

There’s no right or wrong answer when it comes to introversion or extroversion. You might even find some folks are ambiverts who display different personalities in different settings. You can’t force your teammates to take these types of tests. But if you are able to get buy-in, it can be a fun activity to see who considers themselves more introverted or more extroverted. The goal is to understand your own preferences and tendencies and those of your colleagues so you can create a work environment that supports your well-being and maximizes your potential.

Introverts’ Super Powers

The idea that UX is an extrovert’s game couldn’t be further from the truth. As Jeremy Bird notes in his article on the strengths of introverts in design, it’s a field that demands a wide range of skills, including deep listening, empathy, observation, analysis, and creativity — all of which introverts excel at. With so much information already available from articles on UX and introversion noted in the biography below, I’m going to briefly highlight the commonly accepted strengths of introverts.

Deep Listening

Introverts are often exceptional listeners. In user interviews, they give participants the space to fully express their thoughts and feelings, picking up on subtle cues and nuances that others might miss. This active listening leads to a deeper understanding of user needs and motivations, which is crucial for both research and design.

One practitioner shared their experience on LinkedIn:

“In a nutshell, being introverted gives a natural advantage in giving the user space to tell their story. I’m more likely to embrace pauses that others may feel are awkward, but this allows users to either double down on their point or think of another point to add (“lightbulb” moment).”

— Dominique S. Microsoft User Research via LinkedIn

Empathy

Many introverts possess a high degree of empathy. They can easily put themselves in users’ shoes, feeling their frustrations and celebrating their successes. This empathy fuels user-centered design, ensuring that products and services are not only functional but also emotionally resonant.

Observational Skills

Introverts are naturally observant. They notice details in user behavior, interface interactions, and environmental context that others might overlook.

Thoughtful Analysis

Introverts often prefer to process information internally, engaging in deep, solitary reflection before sharing their insights. This leads to well-considered and insightful findings and well-crafted data-informed design.

Independent Work

Introverts tend to thrive in independent work environments. As Heather McCulloch notes, teachers should allow introverted students to work independently or in pairs. This way, they can focus deeply on research tasks, design problems, or data analysis without the distractions that come with constant collaboration.

Now that we’ve covered the commonly recognized strengths introverts bring to the table, let’s cover some common hurdles and explore effective strategies for overcoming them that empower introverts to thrive.

Potential Challenges (And How To Overcome Them)

Being introverted can bring up some challenges when it comes to doing things that require a lot of social energy. However, many introverts in UX find ways to push beyond their natural tendencies to meet the demands of their profession. One UX practitioner shared their experience on LinkedIn:

“I’ve been extremely introverted all my life, but have always been able to proceed beyond my introverted boundaries because of a commitment to (perceived) duty. My passion for synergizing user needs, business needs, and the assorted bevy of constraints that arise helps me downplay and overlook any challenges arising from my tendency to be withdrawn.”

— Darren H. MS UXD via LinkedIn

Networking

Introverts might initially feel overwhelmed in networking situations or workshops: the continual social interaction, unfamiliar environments, and need to interact with new people can be particularly daunting for those who prefer solitude or small-group conversations.

  • Researchers & Designers: Building professional relationships can be challenging for introverts. Large conferences or networking events can feel overwhelming. Small talk can feel forced and inauthentic.
    • Solutions for researchers and designers:
      • Focus on quality over quantity: Instead of trying to meet as many people as possible, focus on building a few meaningful connections.
      • Utilize online communities: Connect with other UX professionals on platforms like LinkedIn or Twitter. Engage in discussions, share your insights, and build relationships virtually before meeting in person.
      • Attend smaller events: Look for niche conferences or meetups focused on specific areas of interest. These tend to be more intimate and less overwhelming than large-scale events.
      • Leverage existing relationships: Don’t be afraid to ask a colleague or mentor to introduce you to someone new.

Presenting Work and Public Speaking

Introverts may initially avoid presenting because they tend to prefer avoiding the spotlight. They may also worry about being judged or scrutinized by others.

  • Researchers: May feel anxious about presenting research findings to stakeholders, especially if they have to do so in front of a large audience.
  • Designers: Can struggle with pitching design concepts or justifying their decisions to clients or colleagues, fearing criticism or pushback.

For the introvert, you might not like this, but you need to get comfortable presenting, and the sooner you do, the better.

Solutions for researchers and designers:

  • Practice, practice, practice
    The more you rehearse your presentation or pitch, the more comfortable you’ll feel. Practice in front of a mirror, record yourself or ask a trusted friend for feedback.
  • Use visual aids
    Slides, mockups, or prototypes can help you illustrate your points and keep your audience engaged.
  • Focus on clear communication
    Structure your presentation logically, use simple language, and avoid jargon. Speak slowly and confidently, and make eye contact with your audience.
  • Build confidence over time
    Start with small presentations or informal feedback sessions. As you gain experience and positive feedback, your confidence will naturally increase.

I’ve personally found presenting in front of a large anonymous crowd to be less intimidating than smaller, intimate meetings where you might know a few people mixed in with a few strangers. In the end, I always remind myself I am supposed to be the expert on what I’ve been asked to present or that my job is to clearly state the outcome of our research to stakeholders hungry to see the relevance of their work. The audience wants to support you and see you succeed. I take confidence in that. I’m also exhausted after giving a presentation where I’ve left it all on the floor.

Now, let’s move on to topics beyond what I’ve found covered in existing articles on UX and introversion and cover workshop facilitation and managing group dynamics.

Managing Group Dynamics

Introverts may find group dynamics challenging, as they often prefer solitary activities and may feel overwhelmed or drained by social interactions. In group settings, introverts may have difficulty asserting themselves, sharing their ideas, or actively participating in discussions. They may also feel uncomfortable being the center of attention or having to make decisions on the spot.

Additionally, introverts may struggle to build relationships with their peers in a group setting, as they may be hesitant to initiate conversations or join in on group activities. These challenges can make it difficult for introverts to fully engage and contribute in group settings, leading to feelings of isolation and exclusion.

One UX designer responding over LinkedIn eloquently shared their experience with communication challenges:

“Introversion can sometimes create challenges in communication, as my thoughtful nature can be misinterpreted as shyness or disinterest. To step out of my shell, I need to build trust with those around me before I can feel truly comfortable. However, I don’t see this as the worst thing in the world. Instead, I view it as an opportunity to identify areas where I need to improve and learn to advocate for myself more effectively in the future. In embracing both the strengths and challenges of being an introvert, I’ve found that my introverted nature not only enhances my work as a designer but also drives continuous personal and professional growth, ultimately leading to better outcomes for both myself and my team.”

— Arafa A. via LinkedIn
  • Challenge: Large groups can be overwhelming, and introverted facilitators might find it difficult to assert control or manage dominant personalities who may derail the discussion.
  • Solutions:
    • Clear Ground Rules: Establish explicit ground rules at the beginning of the workshop to ensure respectful communication and equal participation.
    • Assertive Communication: Practice techniques like “broken record” or “fogging” to politely but firmly redirect the conversation when necessary.
    • Partner with a Co-Facilitator: Collaborate with an extroverted colleague who can complement your strengths. They can take the lead in managing group dynamics and energizing participants.

Managing group dynamics covers a broad number of situations UX professionals face on a daily basis. Let’s get a little more specific and focus on how introverted UXers can thrive as workshop facilitators.

Facilitating Workshops

If you’re an introverted UX professional who shies away from leading workshops, it’s time to reconsider. Here are some of the reasons introverts can be perfect workshop facilitators:

  1. Preparation:
    • Introverts tend to be meticulous planners. We thrive on preparation and often go above and beyond to ensure a workshop is well-structured, organized, and aligned with learning objectives. This thoroughness translates to a smooth, well-paced session that instills confidence in participants.
  2. Thoughtful Facilitation:
    • Introverts are known for their active listening skills. We genuinely want to hear what others have to say and create a safe space for diverse perspectives to emerge. We ask thoughtful questions, encourage reflection, and facilitate meaningful discussions that lead to deeper understanding.
  3. Empathy & Connection: We’ve already discussed in the section on superpowers how introverts excel at empathy and connection.
  4. Observation Skills: We’ve already discussed in the section on superpowers how introverts excel at observational skills.
  5. Comfort with Silence:
    • Introverts understand the power of silence. We’re not afraid to pause and allow reflection after asking a question or during a brainstorming session. This creates space for deeper thinking and prevents premature conclusions or groupthink.

We’ve reviewed many of the challenges introverts might face in their daily work life. Let’s turn our attention to a more recent phenomenon, at least in terms of its widespread availability as an option for many UX professionals: remote work.

Working Remotely

Increased telecommuting offers a unique opportunity for some introverts. Introverts, who often find comfort in solitude and derive energy from spending time alone, sometimes find the constant socialization and bustle of open-plan offices overwhelming and draining.

Remote work provides introverts with an opportunity to control their surroundings and create a workspace that promotes focus, productivity, and creativity. Remote work allows introverts to communicate and collaborate on their own terms. Introverts often prefer one-on-one interactions over large group meetings, and remote work makes it easier for them to engage in meaningful conversations with colleagues and clients.

Potential Challenges For Introverts Working Remotely

While remote work has been a game-changer for many introverts, it is important to acknowledge that it is not without its challenges. Introverts may miss the camaraderie and social connections of an in-person workplace, and they may need to make a conscious effort to stay connected with colleagues and maintain a healthy work-life balance.

Introverts working remotely may need to develop strategies for self-advocacy and communication to ensure that their voices are heard and their contributions are valued in a virtual work environment.

  • Isolation and Disconnect: The lack of face-to-face interaction can lead to feelings of isolation and detachment from the team.
  • Communication Barriers: Virtual communication can be less nuanced, making it harder to convey complex ideas or build rapport with colleagues.
  • Meeting Overload: Excessive video calls can be exhausting for introverts, leading to burnout and decreased productivity.
  • Limited Non-Verbal Cues: Virtual interactions lack the subtle body language and facial expressions that introverts rely on to understand others’ perspectives.

Overcoming Challenges: Strategies For Introverts Working Remotely

Introverted remote employees can implement some of these strategies and tactics to enhance their productivity, reduce burnout, and maintain a positive work environment:

  • Proactive Communication: Initiate regular check-ins with colleagues and managers, both for work-related updates and casual conversations.
  • Schedule Breaks: During long virtual meetings, take short breaks to recharge and refocus.
  • Advocate for Your Needs: If you’re feeling overwhelmed by meetings or social interactions, don’t hesitate to speak up and suggest alternatives, such as asynchronous communication or smaller group discussions.
  • Build Virtual Relationships: Participate in virtual social events, share personal anecdotes in team channels, and find opportunities to connect with colleagues on a personal level.
  • Embrace Video Calls (Strategically): While video calls can be tiring, they can also be valuable for building rapport and understanding non-verbal cues. Use them strategically for important discussions or when you need to connect with a colleague on a deeper level.

Implementing what we’ve covered in this section will help to reduce the likelihood of frustration from both remote working introverts and their colleagues.

Tips For Introverted UX Researchers And Designers

We’ve covered a lot of ideas in this article. If you find yourself nodding along as an introvert, or realizing that you or someone on your team is more introverted, this section and the next will end the article on a high note. They introduce actionable tips for introverted researchers and designers, and for their managers and teammates, to create a working environment where introverts can thrive alongside their extroverted colleagues.

Self-Care

Everyone needs to engage in an appropriate amount of self-care to feel their best. For an introvert, this is often done in solitude, particularly after engaging in a day or week full of social interaction. Some tips that could apply to anyone but are of particular relevance to introverts include the following:

  • Schedule downtime: Block out time in your calendar for quiet reflection and recharging after meetings or social interactions. This could be a walk in nature, reading a book, or simply sitting in silence.
  • Honor your energy levels: Pay attention to when you’re feeling drained. Don’t be afraid to decline invitations or reschedule meetings if you need time to recharge.
  • Create a calming workspace: Surround yourself with things that promote relaxation and focus, such as plants, calming music, or inspiring artwork.

Play To Your Strengths

Introverts know themselves best and have spent a lifetime reflecting on who they are and what makes them wake up happy to go to work. As such, introverts may have a high awareness of their strengths. This allows an introvert to do the following:

  • Identify your unique talents: Are you a meticulous researcher, a creative problem-solver, or a passionate user advocate? Focus on tasks and projects that align with your strengths.
  • Communicate your preferences: Let your manager or team know what type of work you thrive in. Perhaps you prefer to work independently on research tasks or focus on the visual aspects of design.
  • Build on your skills: Seek opportunities to develop your existing skills and acquire new ones. This could involve taking online courses, attending workshops, or seeking mentorship from experienced researchers and designers.

Communication

Introverts might hesitate to speak up when the room is crowded with unknown future friends. However, anyone, introverted or not, needs to be their own best advocate when it comes to making colleagues and management aware of how to create the best workplace environment to thrive in:

  • Advocate for your needs: Don’t be afraid to speak up and ask for what you need to succeed. This could involve requesting a quiet workspace, suggesting alternative meeting formats, or simply letting your team know when you need some time to yourself.
  • Develop your communication skills: Even though you may prefer written communication or one-on-one conversations, it’s important to be able to communicate effectively in various settings. Practice public speaking, participate in team discussions, and learn to articulate your ideas clearly and confidently.

It’s essential for introverts to advocate for their needs and communicate their preferred work styles to their colleagues and managers. One UX professional shared their experience on LinkedIn:

“I do my best work when I have time to think and prepare vs. on-demand thinking, speaking, & decision making. So, I ask for agendas, context, and pre-reads to help me make the most impact in meetings. When I shared this fact, it really helped my outgoing teammates, who never thought that others might operate differently than they do. I got feedback that this was a learning experience for them, and so I have continued to share this fact with new teammates to set expectations and advocate for myself since I find it to be an extrovert-centered business world.”

— Anonymous UXer on LinkedIn

Another LinkedIn UXer provided additional tactics for how they navigate communication styles and expectations, particularly in a fast-paced or extrovert-dominated environment.

“The longer I work with people in a creative capacity, the more I recognize the power of delay. Plenty of introverts are also high-achieving people pleasers (raises hand 🙋🏻). This has caused stress over the years when working with extroverts or verbal processors because there can be a perceived sense of urgency to every thought or ask.
[slowing things down] can look like using certain phrases to help slow down the implied urgency to allow me to more thoughtfully process the ask:
  • “Ah, interesting! Could you say more about that?”
  • “Can you clarify the ‘why’ behind this for me? I want to make sure I’ve got it right.”
  • “How does this support our goals for < x project / user >?”
And if the ask comes through asynchronously via email or Slack, I ask myself the following:
  1. Was this sent during working hours?
  2. Am I the only one who can answer this question / address this issue?
  3. Can I provide a short response that lets the person know their message was seen and that it’s on my radar?”
— Kait L. UXer via LinkedIn

Group Dynamics

Introverts may not initially thrive when it comes to group dynamics. They might wish to observe the group before deeply engaging. They can find it difficult to assert themselves in a group setting and may feel overwhelmed by the constant need for social interaction.

Additionally, introverts may find it harder to contribute to discussions or be slower to form meaningful connections with others in a group. The extroverted nature of group dynamics can be draining for introverts, and they may require more time to recharge after being in a group setting.

  • Prepare in advance: Gather your thoughts, jot down key points, or even practice your delivery. This can help you feel more confident and articulate in group settings.
  • Take breaks: If a meeting is dragging on, step out for a few minutes to recharge. A quick walk or a moment of solitude can do wonders for your energy levels.
  • Seek one-on-one interactions: If you’re struggling to be heard in a group, try scheduling separate meetings with key stakeholders to share your insights or design concepts in a more intimate setting.
  • Utilize virtual collaboration tools: If in-person meetings are particularly draining, suggest using tools like Slack, Miro, or Figma for asynchronous collaboration and feedback.

Introverts often find creative ways to navigate the challenges of large group settings. One UX researcher shared their experience on LinkedIn:

“I have a monthly meeting with many employees (50+) to go over survey results. I realized it was super awkward for me just to wait as people joined the meeting. I tried to make small talk about upcoming weekend plans or what people had done over the weekend, but engagement was still pretty low, and I was not equipped enough to carry on conversations. I decided to fill the time with memes. I would search for user research memes and tie them into why user research is important. More people started coming to my meetings just to see the meme! As time went on, I became known as the meme person. While I can’t necessarily say if that’s a good thing — brand awareness is powerful! At least people know user research exists and that we’re fun — even if it all started from being awkward and introverted.”

— Anonymous LinkedIn UXer

Guidance For Moving Up As An Introverted Researcher Or Designer

I turned to Tim Yeo to provide some insight into how introverts can best prepare for moving up the career ladder. Tim provided some tactical advice focusing on teamwork and people skills:

“Practice your people skills. If you, as an individual, could do it all on your own, you would’ve probably done it already. If you can’t, then you need to work with people to bring your creation to life. It takes a team.”

Tim also shared the strategic reason behind the importance of leaders having excellent people skills:

“We also like to believe that higher management is always more sure, more right, and has all the answers. In my experience, the reality is almost the opposite. Problems get fuzzier, messier, and more complex the higher up the organization you go. Making decisions with incomplete, imperfect information is the norm. To operate successfully in this environment requires steering people to your worldview, and that takes people skills.”

You can find some additional information on ways for introverts (and extroverts) to gain people skills in some of the references listed at the end of this article.

Let’s move on and wrap up with some tips for those who are working alongside introverts.

Tips For Managers And Colleagues Of Introverts

If you are a manager of a team consisting of more than yourself, you likely have an introvert among your team. Tim Yeo states, “Research from Susan Cain’s book, Quiet, shows that 1/3 to 1/2 of our population identify as quiet or introverted.”

Therefore,

“If you work in a diverse team, it follows that 1/3 to 1/2 of your team are quiet. So if you don’t create a space for quiet ones to be heard, that means you are missing out on 1/3 to 1/2 of ideas.”

UX managers of teams that include introverts and extroverts should adopt some of the following practices to create an inclusive work environment where everyone feels valued, heard, and able to contribute effectively to the team’s success. These tips help foster a diverse and productive team dynamic that drives innovation and creativity.

  • Flexibility
    • Offer communication options: Not everyone thrives in the same communication environment. Provide alternatives to large meetings, such as email updates, one-on-one check-ins, or asynchronous communication tools like Slack.
    • Embrace different work styles: Recognize that not everyone is most productive in a bustling office environment. Allow for flexible work arrangements, such as remote work or flexible hours, to accommodate different needs and preferences.
  • Value Diversity
    • Recognize the strengths of introverts: Introverts bring a unique perspective and valuable skills to the table. Encourage their contributions, celebrate their successes, and create an environment where they feel comfortable sharing their ideas.
    • Foster inclusivity: Make sure everyone on the team feels heard and valued, regardless of their personality type. Encourage open communication, active listening, and mutual respect.
  • Create Safe Spaces
    • Provide quiet spaces: Designate areas in the office where people can go to work independently or simply decompress.
    • Encourage breaks: Remind your team to take regular breaks throughout the day to recharge. This could involve stepping outside for fresh air, taking a short walk, or simply closing their eyes for a few minutes of meditation.
  • Professional Development
    • Offer tailored training: Provide opportunities for introverted researchers and designers to develop their communication and presentation skills in a supportive environment. This could involve workshops, coaching, or mentorship programs.

As a bonus, if you’re an introverted UX Manager and you are managing a team composed of introverts and extroverts, remember to encourage a variety of communication channels for your team members. You might default to your preferred style of communication but recognize that different team members may prefer different communication channels.

Some extroverted team members might enjoy brainstorming in large meetings, and introverted team members might prefer to contribute their ideas through written channels such as email, chat, or discussion boards.

Encouraging a variety of communication channels ensures that all team members feel comfortable sharing their thoughts and ideas.

Tim Yeo provided this list of tactics for encouraging and engaging introverts in participating in discussion:

  • Share the agenda before the meeting (so your quiet teammates, who are amazing preppers, by the way, can prepare and be ready to contribute).
  • Use a mix of silent and think-out-loud activities in meetings (so people who process information differently can all perform).
  • Give a heads-up before you call on a quiet colleague to speak.
  • Offer to be a thinking partner (when your quiet colleague appears to be stuck on a piece of work).

Now, let’s move on to focus on some tips for managing remote workers.

Recommendations For Managers And Teams Working Remotely

Managers and colleagues play a crucial role in creating a supportive and inclusive environment for introverted researchers and designers on dispersed teams. Here are some strategies to consider:

  1. Intentional Communication
    • Asynchronous First: Prioritize asynchronous communication methods (email, project management tools, shared documents) for brainstorming, feedback, and routine updates. This gives introverts time to process information and craft thoughtful responses.
    • One-on-One Check-Ins: Schedule regular one-on-one meetings with introverted team members to build rapport, discuss their concerns, and offer individualized support.
    • Mindful Meeting Management: Be mindful of meeting frequency and duration. Consider alternatives to video calls when possible, such as shared documents or asynchronous communication channels. When video calls are necessary, ensure they have a clear agenda and purpose.
  2. Creating Virtual Water Cooler Moments
    • Casual Communication Channels: Set up dedicated IM channels or virtual spaces for non-work-related conversations, allowing for informal social interaction and team bonding.
    • Virtual Social Events: Organize virtual coffee chats, game nights, or team-building activities to foster camaraderie and connection outside of work-related tasks.
    • Collaborative Tools: Utilize virtual whiteboards or shared documents for brainstorming sessions, encouraging participation and idea generation from all team members.
  3. Cultivating Empathy and Understanding
    • Education and Awareness: Share articles or resources about introversion with the team to foster understanding and appreciation for different personality types.
    • Open Dialogue: Encourage open conversations about communication styles and preferences, creating a safe space for everyone to express their needs.
    • Celebrate Strengths: Highlight the unique contributions that introverted team members bring to the table, such as their deep listening skills, thoughtful analysis, and ability to advocate for users.
  4. Leadership Support
    • Model Inclusive Behavior: Managers should lead by example, demonstrating respect for different communication styles and creating opportunities for all team members to contribute.
    • Provide Resources: Offer training or workshops on effective virtual communication and collaboration, tailoring them to the needs of introverted team members.
    • Check-In Regularly: Regularly touch base with introverted team members to gauge their well-being, address any concerns, and offer support.

Managers and teams can implement these strategies to create a work environment that values and empowers introverted researchers and designers, enabling them to thrive and make significant contributions to the team’s success.

Conclusion

We create a more inclusive and productive environment when we understand and appreciate the unique needs and preferences of introverts. Whether you’re an introverted UXer navigating the challenges of remote work or a manager looking to foster a diverse and engaged team, the strategies and insights shared in this article can help you unlock the full potential of introverted talent.

“The superpower of introspection that comes with introversion has enabled me to reflect on my behaviours and engineer myself to become more of an omnivert — able to adapt to different situations.

Being self-aware and working hard to ladder up through increasingly more challenging experiences has taken me from an introvert who was too concerned to tweet to an active leader in the community, delivering seminars, speaking at an international conference and now running a mentorship program for hundreds of UX professionals across the globe.”

— Chris C. UX Master Certified, via LinkedIn

Introversion is not a weakness to be overcome but a valuable asset to be embraced. We build stronger teams, foster innovation, and ultimately deliver more meaningful and impactful user experiences when we create a culture that celebrates both introverted and extroverted strengths. The best solutions often emerge from a blend of diverse perspectives, including the quiet voices that deserve to be heard.

In closing, I’d like to use the words of Tim Yeo, who provides us with some inspiration and positive reinforcement of who we are as introverts:

“You are enough. The world may continue to favour the extrovert ideal, but pretending to be someone you are not will never feel right. Know that there is a different path to have impact at work where you don’t have to pretend to be someone you are not. That path comes from tiny habits, done well, accumulated over time.”

[You can learn more about tiny habits in Tim’s book The Quiet Achiever]

Biography And Additional Resources

]]>
hello@smashingmagazine.com (Victor Yocco)
<![CDATA[SVG Coding Examples: Useful Recipes For Writing Vectors By Hand]]> https://smashingmagazine.com/2024/09/svg-coding-examples-recipes-writing-vectors-by-hand/ https://smashingmagazine.com/2024/09/svg-coding-examples-recipes-writing-vectors-by-hand/ Wed, 18 Sep 2024 09:00:00 GMT Even though I am the kind of front-end engineer who manually cleans up SVG files when they are a mess, I never expected to become one of those people. You know, those crazy people that draw with code.

But here we are.

I dove deep into SVG specs last winter when I created a project to draw Calligraphy Grids, and even though I knew the basic structures and rules of SVG, it was only then that I truly tried to figure out and understand what all of those numbers meant and how they interacted with each other.

And, once you get the hang of it, it is actually very interesting and quite fun to code SVG by hand.

No <path> ahead

We won’t go into more complex SVG shapes like paths in this article; this is more about practical information for simple SVGs. When it comes to drawing curves, I still recommend using a tool like Illustrator or Affinity. However, if you are super into compounding your lines, a path is useful. Maybe we’ll do that in Part 2.

Also, this guide focuses mostly on practical examples that illustrate some of the math involved when drawing SVGs. There is a wonderful article that goes a bit deeper into the specs, which I recommend reading if you’re more interested in that: “A Practical Guide To SVG And Design Tools.”

Drawing With Math. Remember Coordinate Systems?

Illustrator, Affinity, and all other vector programs are basically just helping you draw on a coordinate system, and then those paths and shapes are stored in SVG files.

If you open up these files in an editor, you’ll see that they are just a bunch of paths that contain lots of numbers, which are coordinates in that coordinate system that make up the lines.

But, there is a difference between the all-powerful <path> and the other, more semantic elements like <rect>, <circle>, <line>, <ellipse>, <polygon>, and <polyline>.

These elements are not that hard to read and write by hand, and they open up a lot of possibilities to add animation and other fun stuff. So, while most people might only think of SVGs as never-pixelated, infinitely scaling images, they can also be quite comprehensive pieces of code.

How Does SVG Work? unit != unit

Before we get started on how SVG elements are drawn, let’s talk about the ways units work in SVG because they might be a bit confusing when you first get started.

The beauty of SVG is that it’s a vector format, which means that the units are somewhat detached from the browser and are instead just relative to the coordinate system you’re working in.

That means you would not use a unit within SVG but rather just use numbers and then define the size of the document you’re working with.

So, your width and height might be using CSS rem units, but in your viewBox, units become just a concept that helps you establish sizing relationships.

What Is The viewBox?

The viewBox works a little bit like the CSS aspect-ratio property. It helps you establish a relationship between the width and the height of your coordinate system and sets up the box you’re working in. I tend to think of the viewBox as my “document” size.

Any part of an element that falls outside the viewBox will not be visible. So, the viewBox is the cutout of the coordinate system we’re looking through. The width and height attributes are unnecessary if there is a viewBox attribute.

So, in short, having an SVG with a viewBox makes it behave a lot like a regular image. And just like with images, it’s usually easiest to just set either a width or a height and let the other dimension be automatically sized based on the intrinsic aspect ratio dimensions.
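For example (a hypothetical snippet, not part of the examples we’ll build): with a 2:1 viewBox and only a width set, the rendered height resolves automatically from the intrinsic ratio, here to 15rem.

<!-- 200x100 units make a 2:1 box, so width="30rem" renders at 30rem x 15rem -->
<svg width="30rem" viewBox="0 0 200 100" xmlns="http://www.w3.org/2000/svg">
  <rect width="200" height="100" fill="currentColor" />
</svg>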

So, if we were to create a function that draws an SVG, we might store three separate variables and fill them in like this:

`<svg 
  width="${svgWidth}" 
  viewBox="0 0 ${documentWidth} ${documentHeight}" 
  xmlns="http://www.w3.org/2000/svg"
>`;

SVG Things Of Note

There is a lot to know about SVG: when you want to reuse an image a lot, you may want to turn it into a symbol that can then be referenced with a use tag; you can create sprites; and there are some best practices when using SVGs for icons, and so on.

Unfortunately, this is a bit out of the scope of this article. Here, we’re mainly focusing on designing SVG files and not on how we can optimize and use them.

However, one thing of note that is easier to implement from the start is accessibility.

SVGs can be used in an <img> tag, where alt attributes are available, but then you lose the ability to interact with your SVG code, so inlining might be your preference.

When inlining, it’s easiest to declare role="img" and then add a <title> tag with your image title.

Note: You can check out this article for SVG and Accessibility recommendations.

<svg
  role="img"
  [...attr]
>
  <title>An accessible title</title>
  <!-- design code -->
</svg>

Drawing SVG With JavaScript

There is usually some mathematics involved when drawing SVGs. It’s usually fairly simple arithmetic (except, you know, in case you draw calligraphy grids and then have to dig out trigonometry…), but I think even for simple math, most people don’t write their SVGs in pure HTML and thus would like to use algebra.

At least for me, I find it much easier to understand SVG code when the numbers are given meaning, so I always stick to JavaScript, and by giving my coordinates names, I like them immeasurably more.

So, for the upcoming examples, we’ll look at the list of variables with the simple math and then JSX-style templates for interpolation, as that gives more legible syntax highlighting than string interpolations, and then each example will be available as a CodePen.

To keep this Guide framework-agnostic, I wanted to quickly go over drawing SVG elements with just good old vanilla JavaScript.

We’ll create a container element in HTML that we can put our SVG into and grab that element with JavaScript.

<div data-svg-container></div>
<script src="template.js"></script>

To make it simple, we’ll draw a rectangle <rect> that covers the entire viewBox and uses a fill.

Note: You can add all valid CSS values as fills, so a fixed color, or something like currentColor to access the site’s text color or a CSS variable would work here if you’re inlining your SVG and want it to interact with the page it’s placed in.

Let’s first start with our variable setup.

// vars
const container = document.querySelector("[data-svg-container]");
const svgWidth = "30rem"; // use any value with units here
const documentWidth = 100;
const documentHeight = 100;
const rectWidth = documentWidth;
const rectHeight = documentHeight;
const rectFill = "currentColor"; // use any color value here
const title = "A simple square box";

Method 1: Create Element and Set Attributes

This method is easier to keep type-safe (if using TypeScript) — uses proper SVG elements and attributes, and so on — but it is less performant and may take a long time if you have many elements.

const svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
const titleElement = document.createElementNS("http://www.w3.org/2000/svg", "title");
const rect = document.createElementNS("http://www.w3.org/2000/svg", "rect");

svg.setAttribute("width", svgWidth);
svg.setAttribute("viewBox", 0 0 ${documentWidth} ${documentHeight});
svg.setAttribute("xmlns", "http://www.w3.org/2000/svg");
svg.setAttribute("role", "img");

titleElement.textContent = title;

rect.setAttribute("width", rectWidth);
rect.setAttribute("height", rectHeight);
rect.setAttribute("fill", rectFill);

svg.appendChild(titleElement);
svg.appendChild(rect);

container.appendChild(svg);
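A second common approach, and the one the loop example later in this article leans on, is to build the markup as one template string and let the browser parse it via innerHTML. Here is a minimal sketch reusing the same variables as above:

// A minimal sketch (same variables as above): build the markup as a
// template string and hand it to the browser via innerHTML.
const svgTemplate = `
  <svg
    width="${svgWidth}"
    viewBox="0 0 ${documentWidth} ${documentHeight}"
    xmlns="http://www.w3.org/2000/svg"
    role="img"
  >
    <title>${title}</title>
    <rect width="${rectWidth}" height="${rectHeight}" fill="${rectFill}" />
  </svg>
`;

container.innerHTML = svgTemplate;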

In the composite example we’re about to build (shown in the CodePen below), you can see that with the same coordinates, a polyline won’t draw the line between the blue and the red dot, while a polygon will. However, when applying a fill, they take the exact same information as if the shape was closed, which is the right side of the graphic, where the polyline makes it look like a piece of a circle is missing.

This is the second time where we have dealt with quite a bit of repetition, and we can have a look at how we could leverage the power of JavaScript logic to render our template faster.

But first, we need a basic implementation like we’ve done before. We’re creating objects for the circles, and then we’re chaining the cx and cy values together to create the points attribute. We’re also storing our transforms in variables.

const polyDocWidth = 200;
const polyDocHeight = 200;
const circleOne = { cx: 25, cy: 80, r: 10, fill: "red" };
const circleTwo = { cx: 40, cy: 20, r: 5, fill: "lime" };
const circleThree = { cx: 70, cy: 60, r: 8, fill: "cyan" };
const points = `${circleOne.cx},${circleOne.cy} ${circleTwo.cx},${circleTwo.cy} ${circleThree.cx},${circleThree.cy}`;
const moveToTopRight = `translate(${polyDocWidth / 2}, 0)`;
const moveToBottomRight = `translate(${polyDocWidth / 2}, ${polyDocHeight / 2})`;
const moveToBottomLeft = `translate(0, ${polyDocHeight / 2})`;

And then, we apply the variables to the template, using either a polyline or polygon element and a fill attribute that is either set to none or a color value.


<svg
  width={svgWidth}
  viewBox={`0 0 ${polyDocWidth} ${polyDocHeight}`}
  xmlns="http://www.w3.org/2000/svg"
  role="img"
>
  <title>Composite shape comparison</title>
  <g>
    <circle
      cx={circleOne.cx}
      cy={circleOne.cy}
      r={circleOne.r}
      fill={circleOne.fill}
    />
    <circle
      cx={circleTwo.cx}
      cy={circleTwo.cy}
      r={circleTwo.r}
      fill={circleTwo.fill}
    />
    <circle
      cx={circleThree.cx}
      cy={circleThree.cy}
      r={circleThree.r}
      fill={circleThree.fill}
    />
    <polyline
      points={points}
      fill="none"
      stroke="black"
    />
  </g>
  <g transform={moveToTopRight}>
    <circle
      cx={circleOne.cx}
      cy={circleOne.cy}
      r={circleOne.r}
      fill={circleOne.fill}
    />
    <circle
      cx={circleTwo.cx}
      cy={circleTwo.cy}
      r={circleTwo.r}
      fill={circleTwo.fill}
    />
    <circle
      cx={circleThree.cx}
      cy={circleThree.cy}
      r={circleThree.r}
      fill={circleThree.fill}
    />
    <polyline
      points={points}
      fill="white"
      stroke="black"
    />
  </g>
  <g transform={moveToBottomLeft}>
    <circle
      cx={circleOne.cx}
      cy={circleOne.cy}
      r={circleOne.r}
      fill={circleOne.fill}
    />
    <circle
      cx={circleTwo.cx}
      cy={circleTwo.cy}
      r={circleTwo.r}
      fill={circleTwo.fill}
    />
    <circle
      cx={circleThree.cx}
      cy={circleThree.cy}
      r={circleThree.r}
      fill={circleThree.fill}
    />
    <polygon
      points={points}
      fill="none"
      stroke="black"
    />
  </g>
  <g transform={moveToBottomRight}>
    <circle
      cx={circleOne.cx}
      cy={circleOne.cy}
      r={circleOne.r}
      fill={circleOne.fill}
    />
    <circle
      cx={circleTwo.cx}
      cy={circleTwo.cy}
      r={circleTwo.r}
      fill={circleTwo.fill}
    />
    <circle
      cx={circleThree.cx}
      cy={circleThree.cy}
      r={circleThree.r}
      fill={circleThree.fill}
    />
    <polygon
      points={points}
      fill="white"
      stroke="black"
    />
  </g>
</svg>

And here’s a version of it to play with:

See the Pen SVG Polygon / Polyline (simple) [forked] by Myriam.

Dealing With Repetition

When it comes to drawing SVGs, you may find that you’ll be repeating a lot of the same code over and over again. This is where JavaScript can come in handy, so let’s look at the composite example again and see how we could optimize it so that there is less repetition.

Observations:

  • We have three circle elements, all following the same pattern.
  • We create one repetition to change the fill style for the element.
  • We repeat those two elements one more time, with either a polyline or a polygon.
  • We have four different transforms (technically, the absence of a transform counts as one of the four in this case).

This tells us that we can create nested loops.

Let’s go back to just a vanilla implementation for this since the way loops are done is quite different across frameworks.

You could make this more generic and write separate generator functions for each type of element, but this is just to give you an idea of what you could do in terms of logic. There are certainly still ways to optimize this.

I’ve opted to have arrays for each type of variation that we have and wrote a helper function that goes through the data and builds out an array of objects with all the necessary information for each group. In such a short array, it would certainly be a viable option to just have the data stored in one element, where the values are repeated, but we’re taking the DRY thing seriously in this one.

The group array can then be looped over to build our SVG HTML.

const container = document.querySelector("[data-svg-container]");
const svgWidth = 200;
const documentWidth = 200;
const documentHeight = 200;
const halfWidth = documentWidth / 2;
const halfHeight = documentHeight / 2;
const circles = [
  { cx: 25, cy: 80, r: 10, fill: "red" },
  { cx: 40, cy: 20, r: 5, fill: "lime" },
  { cx: 70, cy: 60, r: 8, fill: "cyan" },
];
const points = circles.map(({ cx, cy }) => `${cx},${cy}`).join(" ");
const elements = ["polyline", "polygon"];
const fillOptions = ["none", "white"];
const transforms = [
  undefined,
  `translate(${halfWidth}, 0)`,
  `translate(0, ${halfHeight})`,
  `translate(${halfWidth}, ${halfHeight})`,
];
const makeGroupsDataObject = () => {
  let counter = 0;
  const g = [];
  elements.forEach((element) => {
    fillOptions.forEach((fill) => {
      const transform = transforms[counter++];
      g.push({ element, fill, transform });
    });
  });
  return g;
};
const groups = makeGroupsDataObject();
// result:
// [
//   {
//     element: "polyline",
//     fill: "none",
//   },
//   {
//     element: "polyline",
//     fill: "white",
//     transform: "translate(100, 0)",
//   },
//   {
//     element: "polygon",
//     fill: "none",
//     transform: "translate(0, 100)",
//   },
//   {
//     element: "polygon",
//     fill: "white",
//     transform: "translate(100, 100)",
//   }
// ]

const svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
svg.setAttribute("width", svgWidth);
svg.setAttribute("viewBox", 0 0 ${documentWidth} ${documentHeight});
svg.setAttribute("xmlns", "http://www.w3.org/2000/svg");
svg.setAttribute("role", "img");
svg.innerHTML = "<title>Composite shape comparison</title>";
groups.forEach((groupData) => {
  const circlesHTML = circles
    .map((circle) => {
      return `<circle 
          cx="${circle.cx}" 
          cy="${circle.cy}" 
          r="${circle.r}" 
          fill="${circle.fill}"
        />`;
    })
    .join("");
  const polyElementHTML = `<${groupData.element} 
      points="${points}" 
      fill="${groupData.fill}" 
      stroke="black" 
    />`;
  const group = `<g ${groupData.transform ? `transform="${groupData.transform}"` : ""}>
        ${circlesHTML}
        ${polyElementHTML}
      </g>`;
  svg.innerHTML += group;
});
container.appendChild(svg);

And here’s the Codepen of that:

See the Pen SVG Polygon / Polyline (JS loop version) [forked] by Myriam.

More Fun Stuff

Now, that’s all the basics I wanted to cover, but there is so much more you can do with SVG. There is more you can do with transform; you can use a mask, you can use a marker, and so on.
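To give just a taste of one of them, here is a minimal mask sketch (a hypothetical example, not used in the grids below): white areas of a mask keep content visible, and black areas hide it.

<svg width="100" viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg" role="img">
  <title>A square with a circular hole</title>
  <defs>
    <mask id="hole">
      <!-- white = visible, black = hidden -->
      <rect width="100" height="100" fill="white" />
      <circle cx="50" cy="50" r="20" fill="black" />
    </mask>
  </defs>
  <rect width="100" height="100" fill="currentColor" mask="url(#hole)" />
</svg>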

We don’t have time to dive into all of them today, but since this started for me when making Calligraphy Grids, I wanted to show you the two most satisfying ones, which I, unfortunately, can’t use in the generator since I wanted to be able to open my generated SVGs in Affinity and it doesn’t support pattern.

Okay, so pattern is part of the defs section within the SVG, which is where you can define reusable elements that you can then reference in your SVG.

Graph Grid with pattern

If you think about it, a graph is just a bunch of horizontal and vertical lines that repeat across the x- and y-axis.

So, pattern can help us with that. We can create a <rect> and then reference a pattern in the fill attribute of the rect. The pattern then has its own width, height, and viewBox, which defines how the pattern is repeated.

So, let’s say we want to perfectly center our graph grid in any given width or height, and we want to be able to define the size of our resulting squares (cells).

Once again, let’s start with the JavaScript variables:

const graphDocWidth = 226;
const graphDocHeight = 101;
const cellSize = 5;
const strokeWidth = 0.3;
const strokeColor = "currentColor";
const patternHeight = (cellSize / graphDocHeight) * 100;
const patternWidth = (cellSize / graphDocWidth) * 100;
const gridYStart = (graphDocHeight % cellSize) / 2;
const gridXStart = (graphDocWidth % cellSize) / 2;
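To make that math concrete with the values above: 226 % 5 = 1, so gridXStart = 0.5, which nudges the first vertical line in by half of the leftover space so the grid ends up centered; likewise, patternWidth = (5 / 226) * 100 ≈ 2.21, meaning each repeated tile spans about 2.21% of the document width.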

Now, we can apply them to the SVG element:

<svg
  width={svgWidth}
  viewBox={`0 0 ${graphDocWidth} ${graphDocHeight}`}
  xmlns="http://www.w3.org/2000/svg"
  role="img"
>
  <defs>
    <pattern
      id="horizontal"
      viewBox={`0 0 ${graphDocWidth} ${strokeWidth}`}
      width="100%"
      height={`${patternHeight}%`}
    >
      <line
        x1="0"
        x2={graphDocWidth}
        y1={gridYStart}
        y2={gridYStart}
        stroke={strokeColor}
        stroke-width={strokeWidth}
      />
    </pattern>
    <pattern
      id="vertical"
      viewBox={`0 0 ${strokeWidth} ${graphDocHeight}`}
      width={`${patternWidth}%`}
      height="100%"
    >
      <line
        y1={0}
        y2={graphDocHeight}
        x1={gridXStart}
        x2={gridXStart}
        stroke={strokeColor}
        stroke-width={strokeWidth}
      />
    </pattern>
  </defs>
  <title>A graph grid</title>
  <rect
    width={graphDocWidth}
    height={graphDocHeight}
    fill="url(#horizontal)"
  />
  <rect
    width={graphDocWidth}
    height={graphDocHeight}
    fill="url(#vertical)"
  />
</svg>

And this is what that then looks like:

See the Pen SVG Graph Grid [forked] by Myriam.

Dot Grid With pattern

If we wanted to draw a dot grid instead, we could simply repeat a circle. Alternatively, we could use a line with a stroke-dasharray and a round stroke-linecap to create a dotted line. And we’d only need one line in this case.

Starting with our JavaScript variables:

const dotDocWidth = 219;
const dotDocHeight = 100;
const cellSize = 4;
const strokeColor = "black";
const gridYStart = (dotDocHeight % cellSize) / 2;
const gridXStart = (dotDocWidth % cellSize) / 2;
const dotSize = 0.5;
const patternHeight = (cellSize / dotDocHeight) * 100;

And then adding them to the SVG element:

<svg
  width={svgWidth}
  viewBox={`0 0 ${dotDocWidth} ${dotDocHeight}`}
  xmlns="http://www.w3.org/2000/svg"
  role="img"
>
  <defs>
    <pattern
      id="horizontal-dotted-line"
      viewBox={`0 0 ${dotDocWidth} ${dotSize}`}
      width="100%"
      height={`${patternHeight}%`}
    >
      <line
        x1={gridXStart}
        y1={gridYStart}
        x2={dotDocWidth}
        y2={gridYStart}
        stroke={strokeColor}
        stroke-width={dotSize}
        stroke-dasharray={`0,${cellSize}`}
        stroke-linecap="round"
      ></line>
    </pattern>
  </defs>
  <title>A Dot Grid</title>
  <rect
    x="0"
    y="0"
    width={dotDocWidth}
    height={dotDocHeight}
    fill="url(#horizontal-dotted-line)"
  ></rect>
</svg>

And this is what that looks like:

See the Pen SVG Dot Grid [forked] by Myriam.

Conclusion

This brings us to the end of our little introductory journey into SVG. As you can see, coding SVG by hand is not as scary as it seems. If you break it down into the basic elements, it becomes quite like any other coding task:

  • We analyze the problem,
  • Break it down into smaller parts,
  • Examine each coordinate and its mathematical breakdown,
  • And then put it all together.

I hope that this article has given you a starting point into the wonderful world of coded images and that it gives you the motivation to delve deeper into the specs and try drawing some yourself.

]]>
hello@smashingmagazine.com (Myriam Frisano)
<![CDATA[Creating Custom Lottie Animations With SVGator]]> https://smashingmagazine.com/2024/09/creating-custom-lottie-animations-svgator/ https://smashingmagazine.com/2024/09/creating-custom-lottie-animations-svgator/ Tue, 17 Sep 2024 11:00:00 GMT This article is sponsored by SVGator

SVGator has gone through a series of updates since our last article, which was published in 2021, when it was already considered to be the most advanced web-based tool for vector animation. The first step toward more versatile software came with the mobile export feature that made it possible to implement the animations in iOS and Android applications.

The animation tool continued its upgrade with a series of new export options: video formats including MP4, AVI, MKV, MOV, and WebM, as well as image formats such as GIF, Animated PNG, WebP, and image sequence. By covering a larger area of users’ needs, the app now enables anyone to create animated stickers, social media, and newsletter animations, video assets, and many more types of visual content on demand.

The goal of becoming a “one tool for all” still lacked the last piece of the puzzle, namely full support for Lottie files. Lottie, just like SVG, is a vector-based format, but it has even better comprehensive multi-platform support, a fact that makes it super popular among developers and design professionals. It is built for use across various platforms, enabling smooth integration into both web and mobile applications. Its file size is minimal, it is infinitely scalable, and developers find it straightforward to implement once they get familiar with the format. Lottie can incorporate raster graphics and also supports interactivity.

SVGator’s latest version has everything you need for your various applications without the need for any third-party apps or plug-ins.

Note: You can test all of SVGator’s functionalities free of charge before committing to the Pro plan. However, you can export up to three watermarked files, with videos and GIFs limited to basic quality.

In this article, we will follow a creation process made of these steps:

  • Importing an existent Lottie JSON and making some minor adjustments;
  • Importing new animated assets created with SVGator (using the library);
  • Creating and animating new elements from scratch;
  • Exporting the Lottie animation.

Getting Started With SVGator

The sign-up process is simple, fast, and straightforward, and no credit card is required. Sign up either with Google or Facebook or, alternatively, by providing your name, email address, and password. Start a project either with a Lottie animation or a static SVG. If you don’t have an existing file, you can design and animate everything starting from a blank canvas.

Now that you’ve created your account, let’s dive right into the fun part. Here’s a preview of how your animation is going to look by the time you’re done following this guide. Neat, right?

Create A New Project

After logging in and clicking on the New Project option, you will be taken to the New Project Panel, where you can choose between starting from a blank project or uploading a file. Let’s start this project with an existing Lottie JSON.

  1. Click on the Upload file button and navigate to the directory where you have saved your Lottie file.

  2. Select the “Fast response.json” file and click Open.

    Hit play in the editor, and the animation should look like this:


Note: Make sure to hit Save after each step to make sure you don’t lose any of your progress while working on this project alongside our guide.

Import An Animated Asset

In this step, you will learn how to use the Library to import new assets to your project. You can easily choose from a variety of ready-made SVGs stored in different categories, load new files from your computer (Lottie, static SVG, and images), or save animations from other SVGator projects and reuse them.

In this case, let’s use an animated message bubble previously created and saved to the Uploads section of the Library.

Learn how to create and save animated assets with this short video tutorial.

  1. Navigate to the left sidebar of the app and switch to the Library tab, then click the “+” icon to upload the message bubble asset that you downloaded earlier.
  2. After it is loaded in the uploads section, simply click on it to add it to your project.

    All the animated properties of the asset are now present in the timeline, and you can edit them if you want.

    Note: Make sure the playhead is at second 0 before adding the animated asset. When adding an animated asset, it will always start animating from the point where the playhead is placed.
  3. Freely adjust its position and size as you wish.
  4. With the playhead at second 0, click on the Animate button, then choose Position.

At this point, you should have the first Position keyframe automatically added at second 0, and you are ready to start animating.

Animate The Message Bubble
  1. Start by dragging the playhead on the timeline to 0.2 seconds:

  2. Then, drag the message bubble up a few pixels. The second keyframe will appear in the timeline, marking the element’s new position, thus creating a 0.2-second animation.

    Note: You can hit Play at any moment to check how everything looks!

    Next, you can use the Scale animator to make the bubble disappear after the dots representing the typing are done animating by scaling it down to 0 for both the X and Y axes:

  3. With the message bubble still selected, drag the playhead to 2.2 seconds, click on Animate, and select Scale (or just press Shift + S on the keyboard) to set the first Scale keyframe, then drag the playhead to 2.5 seconds.
  4. Set the scale properties to 0 for both the X and Y axes (in the right side panel). The bubble won’t be visible anymore at this point.

    Note: To maintain the ratio while changing the scale values, make sure you have the Maintain proportions on (the link icon next to the scale inputs).

    To add an extra touch of interest to this scaling motion, add an easing function preset:

  5. First, jump back to the first Scale keyframe (you can also double-click the keyframe to jump the playhead right at it).
  6. Open the Easing Panel next to the time indicator and scroll down through the presets list, then select Ease in Back. Due to its bezier going out of the graph, this easing function will create a bounce-back effect for the scale animation.

    Note: You can adjust the bezier of a selected easing preset and create a new custom function, which will appear at the top of the list.

    Keep in mind that you need at least one keyframe selected if you intend to apply an easing. The easing function will apply from the selected keyframe toward the next keyframe at its right. Of course, you can apply a certain easing for multiple keyframes at once.

    To get a smoother transition when the message bubble disappears, add a 0.1-second Opacity animation at the end of the scaling:

  7. Choose Opacity from the animators’ list and set the first keyframe at 2.4 seconds, then drag the playhead to 2.5 seconds to match the ending keyframe from the scale animation above.
  8. From the Appearance panel, drag the Opacity slider all the way to the left, at 0%.

Create An Email Icon

For the concept behind this animation to be complete, let’s create (and later animate) a “new email” notification as a response to the character sending that message.

Once again, SVGator’s asset library comes in handy for this step:

  1. Go to the search bar from the Library and type in “mail,” then click on the mail asset from the results.
  2. Place it somewhere above the laptop. Edit the mail icon to better fit the style of the animation:

  3. Open the email group and select the rectangle from the back.
  4. Change its fill color to a dark purple.
  5. Round up the corners using the Radius slider.

  6. Make the element’s design minimal by deleting these two lines from the lower part of the envelope.

  7. Select the envelope seal flap, which is the Polyline element in the group, above the rectangle.
  8. Add a lighter purple for the fill, set the stroke to 2 px width, and also make it white.

    To make the animation even more interesting, create a notification alert in the top-right corner of the envelope:

  9. Use the Ellipse tool (O) from the toolbar on top and draw a circle in the top-right corner of the envelope.
  10. Choose a nice red color for the fill, and set the stroke to white with a 2 px width.

  11. Click on the “T” icon to select the Text tool.
  12. Click on the circle and type “1”.
  13. Set the color to white and click on the “B” icon to make it bold.

  14. Select both the red circle and the number, and group them: right-click, and hit Group.

    You can also hit Command or Ctrl + G on your keyboard. Double-click on the newly created group to rename it to “Notification.”

  15. Select both the notification group and email group below and create a new group, which you can name “new email.”

Animate The New Email Group

Let’s animate the new email popping out of the laptop right after the character has finished texting his message:

  1. With the “New email” group selected, click twice on the Move down icon from the header to place the group last.
    You can also press Command or Ctrl + arrow down on your keyboard.
  2. Drag the group behind the laptop (on the canvas) to hide it entirely, and also scale it down a little.
  3. With the playhead at 3 seconds, add the animators Scale and Position.
    You can also do that by pressing Shift + S and Shift + P on your keyboard.

  4. Drag the playhead to 3.3 seconds on the timeline.
  5. Move the New Email group above the laptop and scale it up a bit.
  6. You can also bend the motion path line to create a curved trajectory for the position animation.

  7. Select the first keyframes at 3 seconds.
  8. Open the easing panel.
  9. Click on the Ease Out Cubic preset to add it to both keyframes.

Animate The Notification

Let’s animate the notification dot separately. We’ll make it pop in while the email group shows up.

  1. Select the Notification group.
  2. Create a scale-up animation for it with 0 for both the X and Y axes at 3.2 seconds and 1 at 3.5 seconds.
  3. Select the first keyframe and, from the easing panel, choose Ease Out Back. This easing function will ensure the popping effect.

Add Expressiveness To The Character

Make the character smile while looking at the email that just popped out. For this, you need to animate the stroke offset of the mouth:

  1. Select the mouth path. You can use the Node tool to select it directly with one click.
  2. Drag the playhead to 3.5 seconds, which is the moment where the smile will start.
  3. Select the last keyframe of the Stroke offset animator from the timeline and duplicate it at 3.5 seconds; you can also use Ctrl or Cmd + D to duplicate.

  4. Drag the playhead to 3.9 seconds.
  5. Go to the properties panel and set the Offset to 0. The stroke will now fill the path all the way, creating a 0.4-second stroke offset animation.

Final Edits

You can still make all kinds of adjustments to your animation before exporting it. In this case, let’s change the color of the initial Lottie animation we used to start this project:

  1. Use the Node tool to select all the green paths that form the character’s arms and torso.
  2. Change the color as you desire.

Export Lottie

Once you’re done editing, you can export the animation by clicking on the top right Export button and selecting the Lottie format. Alternatively, you can press Command or Ctrl + E on your keyboard to jump directly to the export panel, from where you can still select the animation you want to export.

  1. Make sure the Lottie format is selected from the dropdown. In the export panel, you can set a name for the file you are about to export, choose the frame rate and animation speed, or set a background color.
  2. You can preview the Lottie animation with a Lottie player (see the sketch after this list).
    Note: This step is recommended to make sure all animations are supported in the Lottie format by previewing the file on a webpage using a Lottie player. The preview in the export panel isn’t an actual Lottie animation.
  3. Get back to the export panel and simply click Export to download the Lottie JSON.
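For reference, here is a minimal sketch of such a preview page, assuming the open-source lottie-web player and a hypothetical exported file named animation.json:

// A minimal preview sketch, assuming lottie-web (npm install lottie-web)
// and a hypothetical exported file named "animation.json".
import lottie from 'lottie-web';

lottie.loadAnimation({
  container: document.querySelector('#lottie-preview'), // any empty DOM element
  renderer: 'svg',
  loop: true,
  autoplay: true,
  path: 'animation.json', // URL of the exported Lottie JSON
});
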
Final Thoughts

Now that you’re done with your animation, don’t forget that you have plenty of export options available besides Lottie. You can post the same project on social media in video format, export it as an SVG animation for the web, or turn it into a GIF sticker or any other type of visual you can think of. GIF animations can also be used in Figma presentations and prototypes as a high-fidelity preview of the production-ready Lottie file.

We hope you enjoyed this article and that it will inspire you to create amazing Lottie animations in your next project.

Below, you can find a few useful resources to continue your journey with SVG and SVGator:

  • SVGator tutorials
    Check out a series of short video tutorials to help you get started with SVGator.
  • SVGator Help Center
    It answers the most common questions about SVGator, its features, and membership plans.
]]>
hello@smashingmagazine.com (Mihai Cora)
<![CDATA[How To Build Custom Data Visualizations Using Luzmo Flex]]> https://smashingmagazine.com/2024/09/how-build-custom-data-visualizations-luzmo-flex/ https://smashingmagazine.com/2024/09/how-build-custom-data-visualizations-luzmo-flex/ Thu, 12 Sep 2024 11:00:00 GMT This article is sponsored by Luzmo

In this article, I’ll introduce you to Luzmo Flex, a new feature from the Luzmo team who have been working hard making developer tooling to flatten the on-ramp for analytics reporting and data visualization.

With Luzmo Flex, you can hook up a dataset and create beautifully crafted, fully customizable interactive charts that meet your reporting needs. They easily integrate and interact with other components of your web app, allowing you to move away from a traditional “dashboard” interface and build more bespoke data products.

While many charting libraries offer similar features, I often found it challenging to get the data into the right shape that the library needed. In this article, I’ll show you how you can build beautiful data visualizations using the Google Analytics API, and you won’t have to spend any time “massaging” the data!

What Is Luzmo Flex?

Well, it’s two things, really. First of all, Luzmo is a low-code platform for embedded analytics. You can create datasets from just about anything, connect them to APIs like Google Analytics or your PostgreSQL database, or even upload static data in a .csv file and start creating data visualizations with drag and drop.

Secondly, Luzmo Flex is their new React component that can be configured to create custom data visualizations. Everything from the way you query your data to the way you display it can be achieved through code using the LuzmoVizItemComponent.

What makes Luzmo Flex unique is that you can reuse the core functionalities of Luzmo’s low-code embedded analytics platform in your custom-coded components.

That means, besides creating ready-to-use datasets, you can set up functions like the following out-of-the-box:

  • Multi-tenant analytics: Showing different data or visualizations to different users of your app.
  • Localization: Displaying charts in multiple languages, currencies, and timezones without much custom development.
  • Interactivity: Set up event listeners to create complex interactivity between Luzmo’s viz items and any non-Luzmo components in your app.

What Can You Build With Luzmo Flex?

By combining these off-the-shelf functions with flexibility through code, Luzmo Flex makes a great solution for building bespoke data products that go beyond the limits of a traditional dashboard interface. Below are a few examples of what that could look like.

Report Builder

A custom report builder that lets users search and filter a dataset and render it out using a number of different charts.

Filter Panel

Enable powerful filtering using HTML Select inputs, which will update each chart shown on the page.

Wearables Dashboard

Or how about a sleep tracker hooked up to your phone to track all those important snoozes?

When to Consider Luzmo Flex vs Chart Libraries

When building data-intensive applications with something like Recharts, a well-known React charting library, you’ll likely need to reformat the data to fit the shape the library requires. For instance, if I request the top 3 page views from the last seven days for my site, paulie.dev, I would have to query the Google Analytics API as follows.

import dotenv from 'dotenv';
import { BetaAnalyticsDataClient } from '@google-analytics/data';
dotenv.config();

const credentials = JSON.parse(
  Buffer.from(process.env.GOOGLE_APPLICATION_CREDENTIALS_BASE64, 'base64').toString('utf-8')
);

const analyticsDataClient = new BetaAnalyticsDataClient({
  credentials,
});

const [{ rows }] = await analyticsDataClient.runReport({
  property: `properties/${process.env.GA4_PROPERTY_ID}`,
  dateRanges: [
    {
      startDate: '7daysAgo',
      endDate: 'today',
    },
  ],
  dimensions: [
    {
      name: 'fullPageUrl',
    },
    {
      name: 'pageTitle',
    },
  ],
  metrics: [
    {
      name: 'totalUsers',
    },
  ],
  limit: 3,
  metricAggregations: ['MAXIMUM'],
});

The response would look something like this:

[
  {
    "dimensionValues": [
      {
        "value": "www.paulie.dev/",
        "oneValue": "value"
      },
      {
        "value": "Paul Scanlon | Home",
        "oneValue": "value"
      }
    ],
    "metricValues": [
      {
        "value": "61",
        "oneValue": "value"
      }
    ]
  },
  {
    "dimensionValues": [
      {
        "value": "www.paulie.dev/posts/2023/11/a-set-of-sign-in-with-google-buttons-made-with-tailwind/",
        "oneValue": "value"
      },
      {
        "value": "Paul Scanlon | A set of: \"Sign In With Google\" Buttons Made With Tailwind",
        "oneValue": "value"
      }
    ],
    "metricValues": [
      {
        "value": "41",
        "oneValue": "value"
      }
    ]
  },
  {
    "dimensionValues": [
      {
        "value": "www.paulie.dev/posts/2023/10/what-is-a-proxy-redirect/",
        "oneValue": "value"
      },
      {
        "value": "Paul Scanlon | What Is a Proxy Redirect?",
        "oneValue": "value"
      }
    ],
    "metricValues": [
      {
        "value": "23",
        "oneValue": "value"
      }
    ]
  }
]

To make that data work with Recharts, I’d need to reformat it so it conforms to the following data shape.

[
  {
    "name": "Paul Scanlon | Home",
    "value": 61
  },
  {
    "name": "Paul Scanlon | A set of: \"Sign In With Google\" Buttons Made With Tailwind",
    "value": 41
  },
  {
    "name": "Paul Scanlon | What Is a Proxy Redirect?",
    "value": 23
  }
]

To accomplish this, I’d need to use an Array.prototype.map() to iterate over each item, destructure the relevant data and return a key-value pair for the name and value for each.

const data = response.rows.map((row) => {
  const { dimensionValues, metricValues } = row;

  const pageTitle = dimensionValues[1].value;
  const totalUsers = parseInt(metricValues[0].value);

  return {
    name: pageTitle,
    value: totalUsers,
  };
});

And naturally, if you’re reformatting data this way in your application, you’d also want to write unit tests to ensure the data is always formatted correctly to avoid breaking your application… and all of this before you even get on to creating your charts!
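For example, a minimal test might look like this (a sketch assuming Jest and that the map() above lives in a hypothetical helper module):

// A sketch assuming Jest and a hypothetical helper, ./format-report.js,
// that wraps the map() shown above.
import { formatReport } from './format-report';

test('maps GA4 rows to { name, value } pairs', () => {
  const rows = [
    {
      dimensionValues: [{ value: 'www.paulie.dev/' }, { value: 'Paul Scanlon | Home' }],
      metricValues: [{ value: '61' }],
    },
  ];

  expect(formatReport(rows)).toEqual([{ name: 'Paul Scanlon | Home', value: 61 }]);
});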

With Luzmo Flex, all of this goes away, leaving you more time to focus on which data to display and how best to display it.

The First Steps to Building Bespoke Data Products

Typically, when building user interfaces that display data insights, your first job will be to figure out how to query the data source. This can take many forms, from RESTful API requests to direct database queries or sometimes reading from static files. Your next job will be figuring out when and how often these requests need to occur.

  • For data that rarely changes: Perhaps a query in the build step will work.
  • For data that changes regularly: A server-side request on page load.
  • For ever-changing data: A client-side request that polls an API on an interval.

Each will likely inform your application’s architecture, and there’s no single solution to this. Your last job, as mentioned, will be wrangling the responses, reformatting the data, and displaying it in the UI.
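For the last case, a minimal client-side polling sketch (assuming React and a hypothetical /api/report endpoint) could look like this:

// A minimal sketch, assuming React and a hypothetical /api/report endpoint:
// fetch once on mount, then poll on an interval.
import { useEffect, useState } from 'react';

export function useReport(intervalMs = 60000) {
  const [data, setData] = useState(null);

  useEffect(() => {
    let cancelled = false;

    const load = async () => {
      const response = await fetch('/api/report');
      const json = await response.json();
      if (!cancelled) setData(json);
    };

    load();
    const id = setInterval(load, intervalMs);

    return () => {
      cancelled = true;
      clearInterval(id);
    };
  }, [intervalMs]);

  return data;
}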

Below, I’ll show you how to do this using Luzmo Flex by using a simple example product.

What We’re Building: Custom Data Visualizations As Code

Here’s a screenshot of a simple data product I’ve built that displays three different charts for different reporting dimensions exposed by the Google Analytics API for page views for my site, paulie.dev, from the last seven days.

You can find all the code used in this article on the following link:

Getting Started With Luzmo

Before we get going, hop over to Luzmo and sign up for a free trial. You might also like to have a read of one of the getting started guides listed below. In this article, I’ll be using the Next.js starter.

Creating a Google Analytics Dataset

To create data visualization, you’ll first need data! To achieve this using Luzmo, head over to the dashboard, select Datasets from the navigation, and select GA4 Google Analytics. Follow the steps shown in the UI to connect Luzmo with your Google Analytics account.

With the setup complete, you can now select which reporting dimensions to add to your dataset. To follow along with this article, select Custom selection.

Lastly, select the following using the search input: Device Category, Page Title, Date, and Total users, then click Import when you’re ready.

You now have all the data required to build the Google Analytics dashboard. You can access the dataset ID from the URL address bar in your browser. You’ll need this in a later step.

If you’ve followed along from either of the first two getting started guides, you’ll have your API Key, API Token, App server, and API host environment variables set up and saved in a .env file.

Install Dependencies

If you’ve cloned one of the starter repositories, run the following to install the required dependencies.

npm install

Next, install the Luzmo React Embed dependency which exports the LuzmoVizItemComponent.

npm install @luzmo/react-embed@latest

Now, find page.tsx located in the src/app directory, and add your dataset id as shown below.

Add the access object from the destructured response and pass access.datasets[0].id onto the LuzmoClientComponent component using a prop named datasetId.

// src/app/page.tsx


+ import dynamic from 'next/dynamic';

import Luzmo from '@luzmo/nodejs-sdk';
- import LuzmoClientComponent from './components/luzmo-client-component';
+ const LuzmoClientComponent = dynamic(() => import('./components/luzmo-client-component'), {
  ssr: false,
});


const client = new Luzmo({
  api_key: process.env.LUZMO_API_KEY!,
  api_token: process.env.LUZMO_API_TOKEN!,
  host: process.env.NEXT_PUBLIC_LUZMO_API_HOST!,
});

export default async function Home() {
  const response = await client.create('authorization', {
    type: 'embed',
    username: 'user id',
    name: 'first name last name',
    email: 'name@email.com',
    access: {
      datasets: [
        {
-          id: '<dataset_id>',
+          id: '42b43db3-24b2-45e7-98c5-3fcdef20b1a3',
          rights: 'use',
        },
      ],
    },
  });

-  const { id, token } = response;
+  const { id, token, access } = response;

-  return <LuzmoClientComponent authKey={id} authToken={token} />;
+  return <LuzmoClientComponent authKey={id} authToken={token} datasetId={access.datasets[0].id} />;
}

And lastly, find luzmo-client-component.tsx located in src/app/components. This is where you’ll be creating your charts.

Building a Donut Chart

The first chart you’ll create is a Donut chart that shows the various devices used by visitors to your site.

Add the following code to luzmo-client-component.tsx component.

// src/app/component/luzmo-client-component.tsx

'use client';

+ import { LuzmoVizItemComponent } from '@luzmo/react-embed';

interface Props {
  authKey: string;
  authToken: string;
+  datasetId: string;
}

- export default function LuzmoClientComponent({ authKey, authToken}: Props) {
+ export default function LuzmoClientComponent({ authKey, authToken, datasetId }: Props) {

+  const date = new Date(new Date().getTime() - 7 * 24 * 60 * 60 * 1000).toISOString(); // creates a date 7 days ago

  console.log({ authKey, authToken });

  return (
    <section>
+    <div className='w-1/2 h-80'>
+      <LuzmoVizItemComponent
+        appServer={process.env.NEXT_PUBLIC_LUZMO_APP_SERVER}
+        apiHost={process.env.NEXT_PUBLIC_LUZMO_API_HOST}
+        authKey={authKey}
+        authToken={authToken}
+        type='donut-chart'
+        options={{
+          title: {
+            en: `Devices from last 7 days`,
+          },
+          display: {
+            title: true,
+          },
+          mode: 'donut',
+          legend: {
+            position: 'bottom',
+          },
+        }}
+        slots={[
+          {
+            name: 'measure',
+            content: [
+              {
+                label: {
+                  en: 'Total users',
+                },
+                column: '<column id>', // Total users
+                set: datasetId,
+                type: 'numeric',
+                format: '.4f',
+              },
+            ],
+          },
+          {
+            name: 'category',
+            content: [
+              {
+                label: {
+                  en: 'Device category',
+                },
+                column: '<column id>', // Device category
+                set: datasetId,
+                type: 'hierarchy',
+              },
+            ],
+          },
+        ]}
+        filters={[
+          {
+            condition: 'or',
+            filters: [
+              {
+                expression: '? >= ?',
+                parameters: [
+                  {
+                    column_id: '<column id>', // Date
+                    dataset_id: datasetId,
+                  },
+                  date,
+                ],
+              },
+            ],
+          },
+        ]}
+      />
+    </div>
    </section>
  );
}

There’s quite a lot going on in the above code snippet, and I will explain it all in due course, but first, I’ll need to cover a particularly tricky part of the configuration.

Column IDs

You’ll notice the filters parameters, measure, and category content all require a column id.

In the filters parameters, the key is named column_id, and in the measure and category, the key is named column. Both of these are actually the column IDs from the dataset. And here’s how you can find them.

Back in the Luzmo dashboard, click into your dataset and look for the “more dots” next to each column heading. From the menu, select Copy column id. Add each column ID to the keys in the configuration objects.

In my example, I’m using the Total users for the measure, the Device category for the category, and the Date for the filter.

If you’ve added the column IDs correctly, you should be able to see a rendered chart on your screen!

… and as promised, here’s a breakdown of the configuration.

Initial Props Donut Chart

The first part is fairly straightforward. appServer and apiHost are the environment variables you saved to your .env file, and authKey and authToken are destructured from the authorization request and passed into this component via props.

The type prop determines which type of chart to render. In my example, I’m using donut-chart, but you could choose from one of the many options available: area-chart, bar-chart, bubble-chart, box-plot, and many more. You can see all the available options in the Luzmo documentation under Chart docs.

<LuzmoVizItemComponent
  appServer={process.env.NEXT_PUBLIC_LUZMO_APP_SERVER}
  apiHost={process.env.NEXT_PUBLIC_LUZMO_API_HOST}
  authKey={authKey}
  authToken={authToken}
  type='donut-chart'

The one thing I should point out is my use of Tailwind classes: w-1/2 (width: 50%) and h-80 (height: 20rem). The LuzmoVizItemComponent ships with height 100%, so you’ll need to wrap the component with an element that has an actual height, or you won’t be able to see the chart on the page as it could be 100% of the height of an element with no height.

Donut Chart Options

The options object is where you can customize the appearance of your chart. It accepts many configuration options, among which:

  • A title for the chart that accepts a locale with corresponding text to display.
  • A display title value to determine if the title is shown or not.
  • A mode to determine if the chart is to be of type donut or pie chart.
  • A legend option to determine where the legend can be positioned.

All the available configuration options can be seen in the Donut chart documentation.

options={{
  title: {
    en: `Devices from last 7 days`,
  },
  display: {
    title: true,
  },
  mode: 'donut',
  legend: {
    position: 'bottom',
  },
}}

Donut Chart Slots

Slots are where you can configure which column from your dataset to use for the category and measure.

Slots can contain multiple measures, useful for displaying two columns of data per chart, but if more than two are used, one will become the measure.

Each measure contains a content array. The content array, among many other configurations, can include the following:

  • A label and locale,
  • The column id from the dataset,
  • The datasetId,
  • The type of data you’re displaying,
  • A format for the data.

The format used here is Python syntax for floating-point numbers; it’s similar to JavaScript’s .toFixed() method, e.g., number.toFixed(4).

The hierarchy type is the Luzmo standard data type. Any text column is considered a hierarchical data type.

You can read more in the Donut chart documentation about available configuration options for slots.

slots={[
  {
    name: 'measure',
    content: [
      {
        label: {
          en: 'Total users',
        },
        column: '<column id>', // Total users
        set: datasetId,
        type: 'numeric',
        format: '.4f',
      },
    ],
  },
  {
    name: 'category',
    content: [
      {
        label: {
          en: 'Device category',
        },
        column: '<column id>', // Device category
        set: datasetId,
        type: 'hierarchy',
      },
    ],
  },
]}

Donut Chart Filters

The filters object is where you can apply conditions that will determine which data will be shown. In my example, I only want to show data from the last seven days. To accomplish this, I first create the date variable:

const date = new Date(new Date().getTime() - 7 * 24 * 60 * 60 * 1000).toISOString();

This would produce an ISO date string, e.g., 2024-08-21T14:25:40.088Z, which I can use with the filter. The filter uses Luzmo’s Filter Expressions to determine whether the date for each row of the data is greater than or equal to the date variable. You can read more about Filter Expressions in Luzmo’s Academy article.

filters={[
  {
    condition: 'or',
    filters: [
      {
        expression: '? >= ?',
        parameters: [
          {
            column_id: '<column id>', // Date
            dataset_id: datasetId,
          },
          date,
        ],
      },
    ],
  },
]}

Building a Line Chart

The second chart you’ll be creating is a Line chart that displays the number of page views on each date from the last seven days from folks who visit your site.

Initial Props Line Chart

As with the Donut chart, the initial props are pretty much the same, but the type has been changed to line-chart.

<LuzmoVizItemComponent
  appServer={process.env.NEXT_PUBLIC_LUZMO_APP_SERVER}
  apiHost={process.env.NEXT_PUBLIC_LUZMO_API_HOST}
  authKey={authKey}
  authToken={authToken}
  type='line-chart'

Line Chart Options

The options for the Line chart are as follows, with the mode set to grouped.

options={{
  title: {
    en: `Site visits from last 7 days`,
  },
  display: {
    title: true,
  },
  mode: 'grouped',
}}

Line Chart Slots

The slots object is almost the same as before with the Donut chart, but for the Line chart, I’m using the date column from the dataset instead of the device category, and instead of category, I’m using the x-axis slot type. To ensure I’m formatting the data correctly (by day), I’ve used level 5. You can read more about levels in the docs.

slots={[
  {
    name: 'measure',
    content: [
      {
        label: {
          en: 'Total users',
        },
        column: '<column id>', // Total users
        set: datasetId,
        type: 'numeric',
        format: '.4f',
      },
    ],
  },
  {
    name: 'x-axis',
    content: [
      {
        label: {
          en: 'Date',
        },
        column: '<column id>', // Date
        set: datasetId,
        type: 'datetime',
        level: 5,
      },
    ],
  },
]}

Line Chart Filters

I’ve used the same filters as I used in the Donut chart.

Building a Bar Chart

The last chart you’ll be creating is a Bar chart that displays the number of page views for the top ten most viewed pages on your site.

Initial Props Bar Chart

As with the Donut and Line chart, the initial props are pretty much the same, but the type has been changed to bar-chart.

<LuzmoVizItemComponent
  className='w-full h-80'
  appServer={process.env.NEXT_PUBLIC_LUZMO_APP_SERVER}
  apiHost={process.env.NEXT_PUBLIC_LUZMO_API_HOST}
  authKey={authKey}
  authToken={authToken}
  type='bar-chart'

Bar Chart Options

The options for the Bar chart are a little more involved. I’ve included some styling options for the border-radii of the bars, limited the number of results to 10, and sorted the data by the highest page view count first using the sort by measure and direction options.

options={{
  title: {
    en: `Page views from last 7 days`,
  },
  display: {
    title: true,
  },
  mode: 'grouped',
  bars: {
    roundedCorners: 5,
  },
  limit: {
    number: 10,
  },
  sort: {
    by: 'measure',
    direction: 'desc',
  },
}}

Bar Chart Slots

As with the Line chart, I’ve used an axis for one of the columns from the dataset. In this case, it’s the y-axis, which displays the page title.

slots={[
  {
    name: 'measure',
    content: [
      {
        label: {
          en: 'Total users',
        },
        column: '<column id>', // Total users
        set: datasetId,
        type: 'numeric',
        format: '.4f',
      },
    ],
  },
  {
    name: 'y-axis',
    content: [
      {
        label: {
          en: 'Page title',
        },
        column: '<column id>', // Page title
        set: datasetId,
        type: 'hierarchy',
      },
    ],
  },
]}

Bar Chart Filters

I’ve used the same filters as I used in the Donut and Line chart.

What’s Next

As you can see, there are plenty of chart types and customization options. And because this is just an “ordinary” React component, you can easily make it configurable by an end user, allowing options to be set and unset using HTML input elements (checkboxes, selects, date pickers, and so on).
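
For example, here’s a minimal sketch of the idea. ConfigurableDonutChart and showTitle are hypothetical names, and I’m assuming LuzmoVizItemComponent and the remaining props are in scope as in the earlier snippets:

import { useState } from 'react';

// Hypothetical wrapper: a checkbox toggles the chart title on and off
function ConfigurableDonutChart(props) {
  const [showTitle, setShowTitle] = useState(true);

  return (
    <>
      <label>
        <input
          type='checkbox'
          checked={showTitle}
          onChange={(event) => setShowTitle(event.target.checked)}
        />
        Show title
      </label>
      <LuzmoVizItemComponent
        {...props}
        type='donut-chart'
        options={{
          display: {
            title: showTitle,
          },
        }}
      />
    </>
  );
}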

But for me, the real power behind this is not having to mutate data!

This is particularly pertinent when displaying multiple charts with different reporting dimensions. Typically, each chart would require its own utility function or reformatting method. That said, setting column IDs and dataset IDs is a little fiddly, but once you have the component hooked up to the dataset, you can configure and reconfigure as much as you like, all without having to rewrite data formatting functions.
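
One way I’d soften that fiddliness is to collect the IDs in a single configuration object and reference them by name. Here’s a small sketch with placeholder IDs (the columns object is my own convention, not a Luzmo API):

const datasetId = '<dataset id>';

const columns = {
  totalUsers: '<column id>',
  deviceCategory: '<column id>',
  date: '<column id>',
  pageTitle: '<column id>',
};

// ...then, inside each slot definition:
// column: columns.totalUsers,
// set: datasetId,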

If you’re interested in bringing data to life in your application and want to get it done without the usual headaches, book a free demo with the Luzmo team to learn more!

]]>
hello@smashingmagazine.com (Paul Scanlon)
<![CDATA[Why Anticipatory Design Isn’t Working For Businesses]]> https://smashingmagazine.com/2024/09/why-anticipatory-design-not-working-businesses/ https://smashingmagazine.com/2024/09/why-anticipatory-design-not-working-businesses/ Tue, 10 Sep 2024 10:00:00 GMT Consider the early days of the internet, when websites like NBC News and Amazon cluttered their pages with flashing banners and labyrinthine menus. In the early 2000s, Steve Krug’s book Don’t Make Me Think arrived like a lighthouse in a storm, advocating for simplicity and user-centric design.

Today’s digital world is flooded with choices, information, and data, which is both exciting and overwhelming. Unlike in Krug’s time, the problem today isn’t interaction complexity but opacity: AI-powered solutions often lack transparency and explainability, raising concerns about user trust and accountability.

The era of click-and-command is fading, giving way to a more seamless and intelligent relationship between humans and machines.

Expanding on Krug’s Call for Clarity: The Pillars of Anticipatory Design

Krug’s emphasis on clarity in design is more relevant than ever. In anticipatory design, clarity is not just about simplicity or ease of use — it’s about transparency and accountability. These two pillars are crucial but often missing as businesses navigate this new paradigm. Users today find themselves in a digital landscape that is not only confusing but increasingly intrusive. AI predicts their desires based on past behavior but rarely explains how these predictions are made, leading to growing mistrust.

Transparency is the foundation of clarity. It involves openly communicating how AI-driven decisions are made, what data is being collected, and how it is being used to anticipate needs. By demystifying these processes, designers can alleviate user concerns about privacy and control, thereby building trust.

Accountability complements transparency by ensuring that anticipatory systems are designed with ethical considerations in mind. This means creating mechanisms for users to understand, question, and override automated decisions if needed. When users feel that the system is accountable to them, their trust in the technology — and the brand — deepens.

What Makes a Service Anticipatory?

Imagine AI like a waiter at a restaurant. Without AI, the waiter waits for you to interact with them and place your order. But with anticipatory design powered by AI and ML, the waiter can analyze your past orders (historical data) and current behavior (contextual data), notice that you always start with a glass of sparkling water, and bring it to your table before you even ask.

This proactive approach has evolved since the late 1990s, with early examples like Amazon’s recommendation engine and TiVo’s predictive recording. These pioneering services demonstrated the potential of predictive analytics and ML to create personalized, seamless user experiences.

Amazon’s Recommendation Engine (Late 1990s)

Amazon was a pioneer in using data to predict and suggest products to customers, setting the standard for personalized experiences in e-commerce.

TiVo (1999)

TiVo’s ability to learn users’ viewing habits and automatically record shows marked an early step toward predictive, personalized entertainment.

Netflix’s Recommendation System (2006)

Netflix began offering personalized movie recommendations based on user ratings and viewing history in 2006. It helped popularize the idea of anticipatory design in the digital entertainment space.

How Businesses Can Achieve Anticipatory Design

Designing for anticipation is designing for a future that is not here yet but has already started moving toward us.

Designing for anticipation involves more than reacting to current trends; it requires businesses to plan strategically for future user needs. Two critical concepts in this process are forecasting and backcasting.

  • Forecasting analyzes past trends and data to predict future outcomes, helping businesses anticipate user needs.
  • Backcasting starts with a desired future outcome and works backward to identify the steps needed to achieve that goal.

Think of it like planning a dream vacation. Forecasting would involve looking at your past trips to guess where you might go next. But backcasting lets you pick your ideal destination first, then plan the perfect itinerary to get you there.

Forecasting: A Core Concept for Future-Oriented Design

This method helps in planning and decision-making based on probable future scenarios. Consider Netflix, which uses forecasting to analyze viewers’ past viewing habits and predict what they might want to watch next. By leveraging data from millions of users, Netflix can anticipate individual preferences and serve personalized recommendations that keep users engaged and satisfied.

Backcasting: Planning From the Desired Future

Backcasting takes a different approach. Instead of using data to predict the future, it starts with defining a desired future outcome — a clear user intent. The process then works backward to identify the steps needed to achieve that goal. This goal-oriented approach crafts an experience that actively guides users toward their desired future state.

For instance, a financial planning app might start with a user’s long-term financial goal, such as saving for retirement, and then design an experience that guides the user through each step necessary to reach that goal, from budgeting tips to investment recommendations.

Integrating Forecasting and Backcasting In Anticipatory Design

The true power of anticipatory design emerges when businesses efficiently integrate both forecasting and backcasting into their design processes.

For example, Tesla’s approach to electric vehicles exemplifies this integration. By forecasting market trends and user preferences, Tesla can introduce features that appeal to users today. Simultaneously, by backcasting from a vision of a sustainable future, Tesla designs its vehicles and infrastructure to guide society toward a world where electric cars are the norm and carbon emissions are significantly reduced.

Over-Promising and Under-Delivering: The Pitfalls of Anticipatory Design

As businesses increasingly adopt anticipatory design, the integration of forecasting and backcasting becomes essential. Forecasting allows businesses to predict and respond to immediate user needs, while backcasting ensures these responses align with long-term goals. Despite its potential, anticipatory design often fails in execution, leaving few examples of success.

Over the past decade, I’ve observed and documented the rise and fall of several ambitious anticipatory design ventures. Among them, three — Digit, LifeBEAM Vi Sense Headphones, and Mint — highlight the challenges of this approach.

Digit: Struggling with Contextual Understanding

Digit aimed to simplify personal finance with algorithms that automatically saved money based on user spending. However, the service often missed the mark, lacking the contextual awareness necessary to accurately assess users’ real-time financial situations. This led to unexpected withdrawals, frustrating users, especially those living paycheck to paycheck. The result was a breakdown in trust, with the service feeling more intrusive than supportive.

LifeBEAM Vi Sense Headphones: Complexity and User Experience Challenges

LifeBEAM Vi Sense Headphones was marketed as an AI-driven fitness coach, promising personalized guidance during workouts. In practice, the AI struggled to deliver tailored coaching, offering generic and unresponsive advice. As a result, users found the experience difficult to navigate, ultimately limiting the product’s appeal and effectiveness. This disconnection between the promised personalized experience and the actual user experience left many disappointed.

Mint: Misalignment with User Goals

Mint aimed to empower users to manage their finances by providing automated budgeting tools and financial advice. While the service had the potential to anticipate user needs, users often found that the suggestions were not tailored to their unique financial situations, resulting in generic advice that did not align with their personal goals.

The lack of personalized, actionable steps led to a mismatch between user expectations and service delivery. This misalignment caused some users to disengage, feeling that Mint was not fully attuned to their unique financial journeys.

The Risks of Over-promising and Under-Delivering

The stories of Digit, LifeBEAM Vi Sense, and Mint underscore a common pitfall: over-promising and under-delivering. These services focused too much on predictive power and not enough on user experience. When anticipatory systems fail to consider individual nuances, they breed frustration rather than satisfaction, highlighting the importance of aligning design with human experience.

Digit’s approach to automated savings, for instance, became problematic when users found its decisions opaque and unpredictable. Similarly, LifeBEAM’s Vi Sense headphones struggled to meet diverse user needs, while Mint’s rigid tools failed to offer the personalized insights users expected. These examples illustrate the delicate balance anticipatory design must strike between proactive assistance and user control.

Failure to Evolve with User Needs

Many anticipatory services rely heavily on data-driven forecasting, but predictions can fall short without understanding the broader user context. Mint initially provided value with basic budgeting tools but failed to evolve with users’ growing needs for more sophisticated financial advice. Digit, too, struggled to adapt to different financial habits, leading to dissatisfaction and limited success.

Complexity and Usability Issues

Balancing the complexity of predictive systems with usability and transparency is a key challenge in anticipatory design.

When systems become overly complex, as seen with LifeBEAM Vi Sense headphones, users may find them difficult to navigate or control, compromising trust and engagement. Mint’s generic recommendations, born from a failure to align immediate user needs with long-term goals, further illustrate the risks of complexity without clarity.

Privacy and Trust Issues

Trust is critical in anticipatory design, particularly in services handling sensitive data like finance or health. Digit and Mint both encountered trust issues as users grew skeptical of how decisions were made and whether these services truly had their best interests in mind. Without clear communication and control, even the most sophisticated systems risk alienating users.

Inadequate Handling of Edge Cases and Unpredictable Scenarios

While forecasting and backcasting work well for common scenarios, they can struggle with edge cases or unpredictable user behaviors. If an anticipatory service can’t handle these effectively, it risks providing a poor user experience and, in the worst case, harming the user.

LifeBEAM Vi Sense headphones struggled when users deviated from expected fitness routines, offering a one-size-fits-all experience that failed to adapt to individual needs. This highlights the importance of allowing users control, even when a system proactively assists them.

Designing for Anticipatory Experiences

Anticipatory design should empower users to achieve their goals, not just automate tasks.

We can follow a layered approach to plan a service that evolves according to user actions and explicit, ever-evolving intent.

But how do we design for intent without misaligning anticipation and user control or mismatching user expectations and service delivery?

At the core of this approach is intent — the primary purpose or goal that the design must achieve. Surrounding this are workflows, which represent the structured tasks to achieve the intent. Finally, algorithms analyze user data and optimize these workflows.

For instance, Thrive, a digital wellness platform, aligns algorithms and workflows with the core intent of improving well-being. By anticipating user needs and offering personalized programs, Thrive helps users achieve sustained behavior change.

It perfectly exemplifies the three-layered concentric representation for achieving behavior change through anticipatory design:

1. Innermost layer: Intent

Improve overall well-being: Thrive’s core intent is to help users achieve a healthier and more fulfilling life. This encompasses aspects like managing stress, improving sleep quality, and boosting energy levels.

2. Middle layer: Workflows

Personalized programs and support: Thrive uses user data (sleep patterns, activity levels, mood) to create programs tailored to their specific needs and goals. These programs involve various workflows, such as:

  • Guided meditations and breathing exercises to manage stress and anxiety.
  • Personalized sleep routines aimed at improving sleep quality.
  • Educational content and coaching tips to promote healthy habits and lifestyle changes.

3. Outermost layer: Algorithms

Data analysis and personalized recommendations: Thrive utilizes algorithms to analyze user data and generate actionable insights. These algorithms perform tasks like the following:

  • Identify patterns in sleep, activity, and mood to understand user challenges.
  • Predict user behavior to recommend interventions that address potential issues.
  • Optimize program recommendations based on user progress and data analysis.

By aligning algorithms and workflows with the core intent of improving well-being, Thrive provides a personalized and proactive approach to behavior change. Here’s how it benefits users:

  • Sustained behavior change: Personalized programs and ongoing support empower users to develop healthy habits for the long term.
  • Data-driven insights: User data analysis helps users gain valuable insights into their well-being and identify areas for improvement.
  • Proactive support: Anticipates potential issues and recommends interventions before problems arise.

The Future of Anticipatory Design: Combining Anticipation with Foresight

Anticipatory design is inherently future-oriented, making it both appealing and challenging. To succeed, businesses must combine anticipation — predicting future needs — with foresight, a systematic approach to analyzing and preparing for future changes.

Foresight involves considering alternative future scenarios and making informed decisions to navigate toward desired outcomes. For example, Digit and Mint struggled because they didn’t adequately handle edge cases or unpredictable scenarios, a failure in their foresight strategy.

As mentioned, while forecasting and backcasting work well for common scenarios, they can struggle with edge cases or unpredictable user behaviors. If anticipatory design relegates foresight to an afterthought, the business will fail to account for and prepare for emerging trends and disruptive changes. Strategic foresight helps companies prepare for the future and develop strategies to address possible challenges and opportunities.

The foresight process generally involves interrelated activities, including data research, trend analysis, scenario planning, and impact assessment. The ultimate goal is to gain a broader and deeper understanding of the future in order to make more informed and strategic decisions in the design process and to foresee possible frictions and pitfalls in the user experience.

Actionable Insights for Designers

  • Enhance contextual awareness
    Help data scientists or engineers to ensure that the anticipatory systems can understand and respond to the full context of user needs, not just historical data. Plan for pitfalls so you can design safety measures where the user can control the system.
  • Maintain user control
    Provide users with options to customize or override automated decisions, ensuring they feel in control of their experiences.
  • Align short-term predictions with long-term goals
    Use forecasting and backcasting to create a balanced approach that meets immediate needs while guiding users toward their long-term objectives.

Proposing an Anticipatory Design Framework

Predicting the future is no easy task. However, design can borrow foresight techniques to imagine, anticipate, and shape a future where technology seamlessly integrates with users’ evolving needs. To effectively implement anticipatory design, it’s essential to balance human control with AI automation. Here’s a 3-step approach to integrating future thinking into your workflow:

  1. Anticipate Directions of Change
    Identify major trends shaping the future.
  2. Imagine Alternative Scenarios
    Explore potential futures to guide impactful design decisions.
  3. Shape Our Choices
    Leverage these scenarios to align design with user needs and long-term goals.

This proposed framework aims to integrate forecasting and backcasting while emphasizing user intent, transparency, and continuous improvement, ensuring that businesses create experiences that are both predictive and deeply aligned with user needs.

Step 1: Anticipate Directions of Change

Objective: Identify the major trends and forces shaping the future landscape.

Components:

1. Understand the User’s Intent

  • User Research: Conduct in-depth user research through interviews, surveys, and observations to uncover user goals, motivations, pain points, and long-term aspirations or Jobs-to-be-Done (JTBD). This foundational step helps clearly define the user’s intent.
  • Persona Development: Develop detailed user personas that represent the target audience, including their long-term goals and desired outcomes. Prioritize understanding how the service can adapt in real-time to changing user needs, offering recommendations, or taking actions aligned with the persona’s current context.

2. Forecasting: Predicting Near-Term User Needs

  • Data Collection and Analysis: Collaborate closely with data scientists and data engineers to analyze historical data (past interactions), user behavior, and external factors. This collaboration ensures that predictive analytics enhance overall user experience, allowing designers to better understand the implications of data on user behaviors.
  • Predictive Modeling: Implement continuous learning algorithms that refine predictions over time. Regularly assess how these models evolve, adapting to users’ changing needs and circumstances.
  • Explore the Delphi Method: This is a structured communication technique that gathers expert opinions to reach a consensus on future developments. It’s particularly useful for exploring complex issues with uncertain outcomes. Use the Delphi Method to gather insights from industry experts, user researchers, and stakeholders about future user needs and the best strategies to meet those needs. The consensus achieved can help in clearly defining the long-term goals and desired outcomes.

Activities:

  • Conduct interviews and workshops with experts using the Delphi Method to validate key trends.
  • Analyze data and trends to forecast future directions.

Step 2: Imagine Alternative Scenarios

Objective: Explore a range of potential futures based on these changing directions.

Components:

1. Scenario Planning

  • Scenario Development: It involves creating detailed, plausible future scenarios based on various external factors, such as technological advancements, social trends, and economic changes. Develop multiple future scenarios that represent different possible user contexts and their impact on their needs.
  • Scenario Analysis: From these scenarios, you can outline the long-term goals that users might have in each scenario and design services that anticipate and address these needs. Assess how these scenarios impact user needs and experiences.

2. Backcasting: Designing from the Desired Future

  • Define Desired Outcomes: Clearly outline the long-term goals or future states that users aim to achieve. Use backcasting to reduce cognitive load by designing a service that anticipates future needs, streamlining user interactions, and minimizing decision-making efforts.
    • Use Visioning Planning: This is a creative process that involves imagining the ideal future state you want to achieve. It helps in setting clear, long-term goals by focusing on the desired outcomes rather than current constraints. Facilitate workshops or brainstorming sessions with stakeholders to co-create a vision of the future. Define what success looks like from the user’s perspective and use this vision to guide the backcasting process.
  • Identify Steps to Reach Goals: Reverse-engineer the user journey by starting from the desired future state and working backward. Identify the necessary steps and milestones and ensure these are communicated transparently to users, allowing them control over their experience.
  • Create Roadmaps: Develop detailed roadmaps that outline the sequence of actions needed to transition from the current state to the desired future state. These roadmaps should anticipate obstacles, respect privacy, and avoid manipulative behaviors, empowering users rather than overwhelming them.

Activities:

  • Develop and analyze alternative scenarios to explore various potential futures.
  • Use backcasting to create actionable roadmaps from these scenarios, ensuring they align with long-term goals.

Step 3: Shape Our Choices

Objective: Leverage these scenarios to spark new ideas and guide impactful design decisions.

Components:

1. Integrate into the Human-Centered Design Process

  • Iterative Design with Forecasting and Backcasting: Embed insights from forecasting and backcasting into every stage of the design process. Use these insights to inform user research, prototype development, and usability testing, ensuring that solutions address both predicted future needs and desired outcomes. Continuously refine designs based on user feedback.
  • Agile Methodologies: Adopt agile development practices to remain flexible and responsive. Ensure that the service continuously learns from user interactions and feedback, refining its predictions and improving its ability to anticipate needs.

2. Implement and Monitor: Ensuring Ongoing Relevance

  • User Feedback Loops: Establish continuous feedback mechanisms to refine predictive models and workflows. Use this feedback to adjust forecasts and backcasted plans as necessary, keeping the design aligned with evolving user expectations.
  • Automation Tools: Collaborate with data scientists and engineers to deploy automation tools that execute workflows and monitor progress toward goals. These tools should adapt based on new data, evolving alongside user behavior and emerging trends.
  • Performance Metrics: Define key performance indicators (KPIs) to measure the effectiveness, accuracy, and quality of the anticipatory experience. Regularly review these metrics to ensure that the system remains aligned with intended outcomes.
  • Continuous Improvement: Maintain a cycle of continuous improvement where the system learns from each interaction, refining its predictions and recommendations over time to stay relevant and useful.
    • Use Trend Analysis: This involves identifying and analyzing patterns in data over time to predict future developments. This method helps you understand the direction in which user behaviors, technologies, and market conditions are heading. Use trend analysis to identify emerging trends that could influence user needs in the future. This will inform the desired outcomes by highlighting what users might require or expect from a service as these trends evolve.

Activities:

  • Implement design solutions based on scenario insights and iterate based on user feedback.
  • Regularly review and adjust designs using performance metrics and continuous improvement practices.

Conclusion: Navigating the Future of Anticipatory Design

Anticipatory design holds immense potential to revolutionize user experiences by predicting and fulfilling needs before they are even articulated. However, as seen in the examples discussed, the gap between expectation and execution can lead to user dissatisfaction and erode trust.

To navigate the future of anticipatory design successfully, businesses must prioritize transparency, accountability, and user empowerment. By enhancing contextual awareness, maintaining user control, and aligning short-term predictions with long-term goals, companies can create experiences that are not only innovative but also deeply resonant with their users’ needs.

Moreover, combining anticipation with foresight allows businesses to prepare for a range of future scenarios, ensuring that their designs remain relevant and effective even as circumstances change. The proposed 3-step framework — anticipating directions of change, imagining alternative scenarios, and shaping our choices — provides a practical roadmap for integrating these principles into the design process.

As we move forward, the challenge will be to balance the power of AI with the human need for clarity, control, and trust. By doing so, businesses can fulfill the promise of anticipatory design, creating products and services that are not only efficient and personalized but also ethical and user-centric.

In the end,

The success of anticipatory design will depend on its ability to enhance, rather than replace, the human experience.

It is a tool to empower users, not to dictate their choices. When done right, anticipatory design can lead to a future where technology seamlessly integrates with our lives, making everyday experiences simpler, more intuitive, and ultimately more satisfying.

]]>
hello@smashingmagazine.com (Joana Cerejo)
<![CDATA[How To Create A Weekly Google Analytics Report That Posts To Slack]]> https://smashingmagazine.com/2024/09/how-create-weekly-google-analytics-report-posts-slack/ https://smashingmagazine.com/2024/09/how-create-weekly-google-analytics-report-posts-slack/ Fri, 06 Sep 2024 17:00:00 GMT Google Analytics is great, but not everyone in your organization will be granted access. In many places I’ve worked, it was on a kind of “need to know” basis.

In this article, I’m gonna flip that on its head and show you how I wrote a GitHub Action that queries Google Analytics, generates a top ten list of the most frequently viewed pages on my site from the last seven days, and compares them to the previous seven days to tell me which pages have increased in views, which have decreased, which have stayed the same, and which are new to the list.

The report is then nicely formatted with icon indicators and posted to a public Slack channel every Friday at 10 AM.

Not only is this surfaced data useful for folks who might need it, but it also provides an easy way to copy and paste or screenshot the report and add it to a slide for the weekly company or department meeting.

Here’s what the finished report looks like in Slack, and below, you’ll find a link to the GitHub Repository.

GitHub

To use this repository, follow the steps outlined in the README.

Prerequisites

To build this workflow, you’ll need admin access to your Google Analytics and Slack Accounts and administrator privileges for GitHub Actions and Secrets for a GitHub repository.

Customizing the Report and Action

Naturally, all of the code can be changed to suit your requirements, and in the following sections, I’ll explain the areas you’ll likely want to take a look at.

Customizing the GitHub Action

The file name of the Action, weekly-analytics-report.yml, isn’t seen anywhere other than in the code/repo, so feel free to change it to whatever you like; you won’t break anything.

The name and jobs: names detailed below are seen in the GitHub UI and Workflow logs.

The cron syntax determines when the Action will run. Schedules use POSIX cron syntax, so by changing the numbers, you can control exactly when the Action runs.

You could also change the secrets variable names; just make sure you update them in your repository Settings.

# .github/workflows/weekly-analytics-report.yml

name: Weekly Analytics Report

on:
  schedule:
    - cron: '0 10 * * 5' # Runs every Friday at 10 AM UTC
  workflow_dispatch: # Allows manual triggering

jobs:
  analytics-report:
    runs-on: ubuntu-latest

    env:
      SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
      GA4_PROPERTY_ID: ${{ secrets.GA4_PROPERTY_ID }}
      GOOGLE_APPLICATION_CREDENTIALS_BASE64: ${{ secrets.GOOGLE_APPLICATION_CREDENTIALS_BASE64 }}

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20.x'

      - name: Install dependencies
        run: npm install

      - name: Run the JavaScript script
        run: node src/services/weekly-analytics.js

Customizing the Google Analytics Report

The Google Analytics API request I’m using pulls the fullPageUrl and pageTitle for the totalUsers in the last seven days; a second request covers the previous seven days. The results are then aggregated, and the responses are limited to 10.

You can use Google’s GA4 Query Explorer to construct your own query, then replace the requests.

// src/services/weekly-analytics.js#L75

const [thisWeek] = await analyticsDataClient.runReport({
  property: `properties/${process.env.GA4_PROPERTY_ID}`,
  dateRanges: [
    {
      startDate: '7daysAgo',
      endDate: 'today',
    },
  ],
  dimensions: [
    {
      name: 'fullPageUrl',
    },
    {
      name: 'pageTitle',
    },
  ],
  metrics: [
    {
      name: 'totalUsers',
    },
  ],
  limit: reportLimit,
  metricAggregations: ['MAXIMUM'],
});
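
The second request is almost identical; only the dateRanges shift back by a week. Here’s a sketch of what it might look like, assuming the previous window covers the seven days before the current one:

// src/services/weekly-analytics.js (sketch of the second request)

const [lastWeek] = await analyticsDataClient.runReport({
  property: `properties/${process.env.GA4_PROPERTY_ID}`,
  dateRanges: [
    {
      startDate: '14daysAgo',
      endDate: '8daysAgo',
    },
  ],
  // ...the same dimensions, metrics, limit, and metricAggregations as above
});
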
Creating the Comparisons

There are two functions to determine which page views have increased, decreased, stayed the same, or are new.

The first is a simple reduce function that builds an object mapping each URL to its count.

const lastWeekMap = lastWeekResults.reduce((items, item) => {
  const { url, count } = item;
  items[url] = count;
  return items;
}, {});

The second maps over the results from this week and compares them to last week.

// Generate the report for this week
const report = thisWeekResults.map((item, index) => {
  const { url, title, count } = item;
  const lastWeekCount = lastWeekMap[url];
  const status = determineStatus(count, lastWeekCount);

  return {
    position: (index + 1).toString().padStart(2, '0'), // Format the position with leading zero if it's less than 10
    url,
    title,
    count: { thisWeek: count, lastWeek: lastWeekCount || '0' }, // Ensure lastWeekCount is displayed as '0' if not found
    status,
  };
});
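
For reference, logging a single entry of the resulting report would print something along these lines (the values are illustrative):

// console.log(report[0]);
// {
//   position: '01',
//   url: 'https://example.com/posts/hello-world',
//   title: 'Hello World',
//   count: { thisWeek: '42', lastWeek: '36' },
//   status: HIGHER, // one of NEW, HIGHER, LOWER, or SAME
// }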

The final function is used to determine the status of each.

// Function to determine the status
const determineStatus = (count, lastWeekCount) => {
  const thisCount = Number(count);
  const previousCount = Number(lastWeekCount);

  if (lastWeekCount === undefined || lastWeekCount === '0') {
    return NEW;
  }

  if (thisCount > previousCount) {
    return HIGHER;
  }

  if (thisCount < previousCount) {
    return LOWER;
  }

  return SAME;
};
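
The NEW, HIGHER, LOWER, and SAME constants aren’t shown in the snippets above. Since the Slack message later builds an icon URL from the status, my assumption is that they resolve to icon file names, something like this:

// Assumption: each status constant maps to an icon file referenced in the Slack message
const NEW = 'new.png';
const HIGHER = 'higher.png';
const LOWER = 'lower.png';
const SAME = 'same.png';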

I’ve purposely left the code fairly verbose, so it’ll be easier for you to add console.log to each of the functions to see what they return.

Customizing the Slack Message

The Slack message config I’m using creates a heading with an emoji, a divider, and a paragraph explaining what the message is.

Below that, I’m using the context object to construct the report by iterating over the comparisons and returning objects containing Slack-specific message syntax: an icon, a count, the name of the page, and a link for each item.

You can use Slack’s Block Kit Builder to construct your own message format.

// src/services/weekly-analytics.js#L151

const slackList = report.map((item) => {
  const {
    position,
    url,
    title,
    count: { thisWeek, lastWeek },
    status,
  } = item;

  return {
    type: 'context',
    elements: [
      {
        type: 'image',
        image_url: `${reportConfig.url}/images/${status}`,
        alt_text: 'icon',
      },
      {
        type: 'mrkdwn',
        text: `${position}. <${url}|${title}> | *x${thisWeek}* / x${lastWeek}`,
      },
    ],
  };
});
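
The list items are then combined with the heading, divider, and intro paragraph and posted to the Webhook. Here’s a rough sketch of that final step; the heading and section text are illustrative, and I’m assuming the payload is sent with a plain fetch call (available natively in Node 20):

// Sketch: assemble the full message and POST it to the Slack Webhook
const blocks = [
  {
    type: 'header',
    text: { type: 'plain_text', text: ':chart_with_upwards_trend: Weekly Analytics Report', emoji: true },
  },
  { type: 'divider' },
  {
    type: 'section',
    text: { type: 'mrkdwn', text: 'The top ten pages on the site over the last seven days.' },
  },
  ...slackList,
];

await fetch(process.env.SLACK_WEBHOOK_URL, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ blocks }),
});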

Before you can run the GitHub Action, you will need to complete a number of Google, Slack, and GitHub steps.

Ready to get going?

Creating a Google Cloud Project

Head over to your Google Cloud console, and from the dropdown menu at the top of the screen, click Select a project, and when the modal opens up, click NEW PROJECT.

Project name

On the next screen, give your project a name and click CREATE. In my example, I’ve named the project smashing-weekly-analytics.

Enable APIs & Services

In this step, you’ll enable the Google Analytics Data API for your new project. From the left-hand sidebar, navigate to APIs & Services > Enable APIs & services. At the top of the screen, click + ENABLE APIS & SERVICES.

Enable Google Analytics Data API

Search for “Google analytics data API,” select it from the list, then click ENABLE.

Create Credentials for Google Analytics Data API

With the API enabled in your project, you can now create the required credentials. Click the CREATE CREDENTIALS button at the top right of the screen to set up a new Service account.

A Service account allows an “application” to interact with Google APIs, provided the credentials include the required services. In this example, the credentials grant access to the Google Analytics Data API.

Service Account Credentials Type

On the next screen, select Google Analytics Data API from the dropdown menu and Application data, then click NEXT.

Service Account Details

On the next screen, give your Service account a name, ID, and description (optional). Then click CREATE AND CONTINUE.

In my example, I’ve given my service account a name and ID of smashing-weekly-analytics and added a short description that explains what the service account does.

Service Account Role

On the next screen, select Owner for the Role, then click CONTINUE.

Service Account Done

You can leave the fields blank in this last step and click DONE when you’re ready.

Service Account Keys

From the left-hand navigation, select Service Accounts, then click the “more dots” to open the menu and select Manage keys.

Service Accounts Add Key

On the next screen, locate the KEYS tab at the top of the screen, then click ADD KEY and select Create new key.

Service Accounts Download Keys

On the next screen, select JSON as the key type, then click CREATE to download your Google Application credentials .json file.

Google Application Credentials

If you open the .json file in your code editor, you should be looking at something similar to the one below.

In case you’re wondering, no, you can’t use an object as a variable defined in an .env file. To use these credentials, it’s necessary to convert the whole file into a base64 string.

Note: I wrote a more detailed post about how to use Google Application credentials as environment variables here: “How to Use Google Application .json Credentials in Environment Variables.”

From your terminal, run the following: replace name-of-creds-file.json with the name of your .json file.

cat name-of-creds-file.json | base64

If you’ve already cloned the repo and followed the Getting started steps in the README, take the base64 string returned by the command above and add it to the GOOGLE_APPLICATION_CREDENTIALS_BASE64 variable in your .env file, and make sure you wrap the string in double quotation marks.

GOOGLE_APPLICATION_CREDENTIALS_BASE64="abc123"
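
At runtime, the script can decode that string back into the original credentials object before handing it to the Google Analytics client. In Node, the decoding step looks something like this:

// Decode the base64 string back into the original JSON credentials object
const credentials = JSON.parse(
  Buffer.from(process.env.GOOGLE_APPLICATION_CREDENTIALS_BASE64, 'base64').toString('utf-8')
);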

That completes the Google project side of things. The next step is to add your service account email to your Google Analytics property and find your Google Analytics Property ID.

Google Analytics Properties

Whilst your service account now has access to the Google Analytics Data API, it doesn’t yet have access to your Google Analytics account.

Get Google Analytics Property ID

To make queries to the Google Analytics API, you’ll need to know your Property ID. You can find it by heading over to your Google Analytics account. Make sure you’re on the correct property (in the screenshot below, I’ve selected paulie.dev — GA4).

Click the admin cog in the bottom left-hand side of the screen, then click Property details.

On the next screen, you’ll see the PROPERTY ID in the top right corner. If you’ve already cloned the repo and followed the Getting started steps in the README, add the property ID value to the GA4_PROPERTY_ID variable in your .env file.

Add Client Email to Google Analytics

From the Google application credential .json file you downloaded earlier, locate the client_email and copy the email address.

In my example, it looks like this: smashing-weekly-analytics@smashing-weekly-analytics.iam.gserviceaccount.com.

Now navigate to Property access management from the left-hand side navigation and click the + in the top right-hand corner, then click Add users.

On the next screen, add the client_email to the Email addresses input, uncheck Notify new users by email, and select Viewer under Direct roles and data restrictions, then click Add.

That completes the Google Analytics properties section. Your “application” will use the Google application credentials containing the client_email and will now have access to your Google Analytics account via the Google Analytics Data API.

Slack Channels and Webhook

In the following steps, you’ll create a new Slack channel that will be used to post messages sent from your “application” using a Slack Webhook.

Creating The Slack Channel

Create a new channel in your Slack workspace. I’ve named mine #weekly-analytics-report. You’ll need to set this up before proceeding to the next step.

Creating a Slack App

Head over to the Slack API dashboard, and click Create an App.

On the next screen, select From an app manifest.

On the next screen, select your Slack workspace, then click Next.

On this screen, you can give your app a name. In my example, I’ve named mine Weekly Analytics Report. Click Next when you’re ready.

On step 3, you can just click Done.

With the App created, you can now set up a Webhook.

Creating a Slack Webhook

Navigate to Incoming Webhooks from the left-hand navigation, then switch the toggle to On to activate incoming webhooks. Then, at the bottom of the screen, click Add New Webhook to Workspace.

On the next screen, select your Slack workspace and the channel that you’d like to post messages to, then click Allow.

You should now see your new Slack Webhook with a copy button. Copy the Webhook URL, and if you’ve already cloned the repo and followed the Getting started steps in the README, add the Webhook URL to the SLACK_WEBHOOK_URL variable in your .env file.

Slack App Configuration

From the left-hand navigation, select Basic Information. On this screen, you can customize your app and add an icon and description. Be sure to click Save Changes when you’re done.

If you now head over to your Slack, you should see that your app has been added to your workspace.

That completes the Slack section of this article. It’s now time to add your environment variables to GitHub Secrets and run the workflow.

Add GitHub Secrets

Head over to the Settings tab of your GitHub repository, then from the left-hand navigation, select Secrets and variables, then click Actions.

Add the three variables from your .env file under Repository secrets.

A note on the base64 string: You won’t need to include the double quotes!

Run Workflow

To test if your Action is working correctly, head over to the Actions tab of your GitHub repository, select the Job name (Weekly Analytics Report), then click Run workflow.

If everything worked correctly, you should now be looking at a nicely formatted list of the top ten page views on your site in Slack.

Finished

And that’s it! A fully automated Google Analytics report that posts directly to your Slack. I’ve worked in a few places where Google Analytics data was on lockdown, and I think this approach to sharing Analytics data with Slack (something everyone has access to) could be super valuable for various people in your organization.

]]>
hello@smashingmagazine.com (Paul Scanlon)
<![CDATA[Sticky Headers And Full-Height Elements: A Tricky Combination]]> https://smashingmagazine.com/2024/09/sticky-headers-full-height-elements-tricky-combination/ https://smashingmagazine.com/2024/09/sticky-headers-full-height-elements-tricky-combination/ Thu, 05 Sep 2024 09:00:00 GMT I was recently asked by a student to help with a seemingly simple problem. She’d been working on a website for a coffee shop that sports a sticky header, and she wanted the hero section right underneath that header to span the rest of the available vertical space in the viewport.

Here’s a visual demo of the desired effect for clarity.

Looks like it should be easy enough, right? I was sure (read: overconfident) that the problem would only take a couple of minutes to solve, only to find it was a much deeper well than I’d assumed.

Before we dive in, let’s take a quick look at the initial markup and CSS to see what we’re working with:

<body>
  <header class="header">Header Content</header>
  <section class="hero">Hero Content</section>
  <main class="main">Main Content</main>
</body>
.header {
  position: sticky;
  top: 0; /* Offset, otherwise it won't stick! */
}

/* etc. */

With those declarations, the .header will stick to the top of the page. And yet the .hero element below it remains intrinsically sized. This is what we want to change.

The Low-Hanging Fruit

The first impulse you might have, as I did, is to enclose the header and hero in some sort of parent container and give that container 100vh to make it span the viewport. After that, we could use Flexbox to distribute the children and make the hero grow to fill the remaining space.

<body>
  <div class="container">
    <header class="header">Header Content</header>
    <section class="hero">Hero Content</section>
  </div>
  <main class="main">Main Content</main>
</body>
.container {
  height: 100vh;
  display: flex;
  flex-direction: column;
}

.hero {
  flex-grow: 1;
}

/* etc. */

This looks correct at first glance, but watch what happens when scrolling past the hero.

See the Pen Attempt #1: Container + Flexbox [forked] by Philip.

The sticky header gets trapped in its parent container! But.. why?

If you’re anything like me, this behavior is unintuitive, at least initially. You may have heard that sticky is a combination of relative and fixed positioning, meaning it participates in the normal flow of the document but only until it hits the edges of its scrolling container, at which point it becomes fixed. While viewing sticky as a combination of other values can be a useful mnemonic, it fails to capture one important difference between sticky and fixed elements:

A position: fixed element doesn’t care about the parent it’s nested in or any of its ancestors. It will break out of the normal flow of the document and place itself directly offset from the viewport, as though glued in place a certain distance from the edge of the screen.

Conversely, a position: sticky element will be pushed along with the edges of the viewport (or next closest scrolling container), but it will never escape the boundaries of its direct parent. Well, at least if you don’t count visually transform-ing it. So a better way to think about it might be, to steal from Chris Coyier, that “position: sticky is, in a sense, a locally scoped position: fixed.” This is an intentional design decision, one that allows for section-specific sticky headers like the ones made famous by alphabetical lists in mobile interfaces.

See the Pen Sticky Section Headers [forked] by Philip.

Okay, so this approach is a no-go for our predicament. We need to find a solution that doesn’t involve a container around the header.

Fixed, But Not Solved

Maybe we can make our lives a bit simpler. Instead of a container, what if we gave the .header element a fixed height of, say, 150px? Then, all we have to do is define the .hero element’s height as height: calc(100vh - 150px).

See the Pen Attempt #2: Fixed Height + Calc() [forked] by Philip.

This approach kinda works, but the downsides are more insidious than our last attempt because they may not be immediately apparent. You probably noticed that the header is too tall, and we’d wanna do some math to decide on a better height.

Thinking ahead a bit,

  • What if the .header’s children need to wrap or rearrange themselves at different screen sizes or grow to maintain legibility on mobile?
  • What if JavaScript is manipulating the contents?

All of these things could subtly change the .header’s ideal size, and chasing the right height values for each scenario has the potential to spiral into a maintenance nightmare of unmanageable breakpoints and magic numbers — especially if we consider this needs to be done not only for the .header but also the .hero element that depends on it.

I would argue that this workaround also just feels wrong. Fixed heights break one of the main affordances of CSS layout — the way elements automatically grow and shrink to adapt to their contents — and not relying on this usually makes our lives harder, not simpler.

So, we’re left with…

A Novel Approach

Now that we’ve figured out the constraints we’re working with, another way to phrase the problem is that we want the .header and .hero to collectively span 100vh without sizing the elements explicitly or wrapping them in a container. Ideally, we’d find something that already is 100vh and align them to that. This is where it dawned on me that display: grid may provide just what we need!

Let’s try this: We declare display: grid on the body element and add another element before the .header that we’ll call .above-the-fold-spacer. This new element gets a height of 100vh and spans the grid’s entire width. Next, we’ll tell our spacer that it should take up two grid rows and we’ll anchor it to the top of the page.

This element must be entirely empty because we don’t ever want it to be visible or to register to screen readers. We’re merely using it as a crutch to tell the grid how to behave.

<body>
  <!-- This spacer provides the height we want -->
  <div class="above-the-fold-spacer"></div>

  <!-- These two elements will place themselves on top of the spacer -->
  <header class="header">Header Content</header>
  <section class="hero">Hero Content</section>

  <!-- The rest of the page stays unaffected -->
  <main class="main">Main Content</main>
</body>
body {
  display: grid;
}

.above-the-fold-spacer {
  height: 100vh;
  /* Span from the first to the last grid column line */
  /* (Negative numbers count from the end of the grid) */
  grid-column: 1 / -1;
  /* Start at the first grid row line, and take up 2 rows */
  grid-row: 1 / span 2; 
}

/* etc. */

This is the magic ingredient.

By adding the spacer, we’ve created two grid rows that together take up exactly 100vh. Now, all that’s left to do, in essence, is to tell the .header and .hero elements to align themselves to those existing rows. We do have to tell them to start at the same grid column line as the .above-the-fold-spacer element so that they won’t try to sit next to it. But with that done… ta-da!

See the Pen The Solution: Grid Alignment [forked] by Philip.

The reason this works is that a grid container can have multiple children occupying the same cell overlaid on top of each other. In a situation like that, the tallest child element defines the grid row’s overall height — or, in this case, the combined height of the two rows (100vh).

To control how exactly the two visible elements divvy up the available space between themselves, we can use the grid-template-rows property. I made it so that the first row uses min-content rather than 1fr. This is necessary so that the .header doesn’t take up the same amount of space as the .hero but instead only takes what it needs and lets the hero have the rest.

Here’s our full solution:


body {
  display: grid;
  grid-template-rows: min-content 1fr;
}

.above-the-fold-spacer {
  height: 100vh;
  grid-column: 1 / -1;
  grid-row: 1 / span 2;
}

.header {
  position: sticky;
  top: 0;
  grid-column-start: 1;
  grid-row-start: 1;
}

.hero {
  grid-column-start: 1;
  grid-row-start: 2;
}

And voila: A sticky header of arbitrary size above a hero that grows to fill the remaining visible space!

Caveats and Final Thoughts

It’s worth noting that the HTML order of the elements matters here. If we define .above-the-fold-spacer after our .hero section, it will overlay and block access to the elements underneath. We can work around this by declaring either order: -1, z-index: -1, or visibility: hidden.

Keep in mind that this is a simple example. If you were to add a sidebar to the left of your page, for example, you’d need to adjust at which column the elements start. Still, in the majority of cases, using a CSS Grid approach is likely to be less troublesome than the Sisyphean task of manually managing and coordinating the height values of multiple elements.

Another upside of this approach is that it’s adaptable. If you decide you want a group of three elements to take up the screen’s height rather than two, then you’d make the invisible spacer span three rows and assign the visible elements to the appropriate one. Even if the hero element’s content causes its height to exceed 100vh, the grid adapts without breaking anything. It’s even well-supported in all modern browsers.

The more I think about this technique, the more I’m persuaded that it’s actually quite clean. Then again, you know how lawyers can talk themselves into their own arguments? If you can think of an even simpler solution I’ve overlooked, feel free to reach out and let me know!

]]>
hello@smashingmagazine.com (Philip Braunen)
<![CDATA[The Big Difference Between Digital Product And Web Design]]> https://smashingmagazine.com/2024/09/big-difference-between-digital-product-and-web-design/ https://smashingmagazine.com/2024/09/big-difference-between-digital-product-and-web-design/ Tue, 03 Sep 2024 09:00:00 GMT In the early days of the web, I remember how annoying it was when print designers would claim they could design websites, too. They assumed that just because they could design for one medium, they could design for the other.

That assumption often led to bad user experiences. The skills for effective web design are quite different from those for print design.

A similar thing happens today. Designers know how to design traditional marketing and e-commerce sites. They, therefore, presume they have the skills to work on SaaS apps and other digital projects.

But when it comes to design, there’s a big distinction between traditional websites and digital products. If we want to work on digital products, we need to understand those differences and adopt a different approach to our work.

People Interact with Digital Products More Regularly

The biggest difference is that users interact with digital products more than most websites.

Think about your own web use. What are the sites you visit most often? If you listed your top ten, well over half would be some form of digital product, from a social media application to a productivity tool.

So, with that in mind, let’s dive into the specifics of how the frequency of usage impacts our design approach and what we can do about it.

Why Frequency of Use Matters So Much

The more we interact with a web app or website, the more important the overall user experience becomes. Users develop deeper connections with digital products. They also form more complex mental models of products they use often. This changes how they see the app in two fundamental ways.

Friction Becomes Significantly More Irritating

First, friction points become increasingly annoying. Users interact with a digital product many times per day. Any small problem in the interface compounds quickly.

When you encounter a clunky UI or confusing workflow on a website you only visit once in a while, it’s frustrating but easy to overlook. But, when that same friction occurs in an app you use multiple times per day, it becomes a major source of irritation.

Change Undermines Our Procedural Knowledge

Second, the more we use an app, the more familiar we become with it and how it works. We end up using the app automatically, without even thinking, much like when you’ve been driving a car for years, you don’t think about the process. This is known as procedural knowledge.

This is great news for digital product designers, as it means we can create interfaces that become second nature to our users. But, if we break their mental models or introduce unexpected changes, we risk causing frustration and disruption.

So, knowing these two things, how does this affect our approach to digital product design? Well, let’s start by considering the problem of friction.

Fixing Friction Points

As digital product designers, we need to become obsessed with removing friction from the user experience. Failure to do so will alienate users over time and ultimately lead to churn.

To mitigate friction, we need to constantly seek out friction points. We need to diagnose the exact problem and then test any solution to ensure it does, in fact, make things better.

So, how exactly do we find friction points?

Finding Friction

The most obvious way is to listen to customers. User feedback is crucial in identifying friction points in the user experience. However, we can’t simply rely on that. Analytics can be your friend, too.

Microsoft Clarity offers essential insights to pinpoint issues on your app.

I would highly recommend using a tool like Microsoft Clarity. It gives detailed insights into user behavior that help you find points of friction, including the following:

  • Rage clicks: Where individuals continuously click on something due to frustration.
  • Dead clicks: Where people click on something that is not clickable.
  • Excessive scrolling: Where users scroll up and down looking for something.
  • Quick backs: Where a person accidentally lands on a screen and promptly navigates back to the previous one.
  • Error messages: Where the user is triggering an error in the system.

These will help you identify potential friction points that you can then investigate further.
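
To make this concrete, here is a minimal sketch of how a rage-click detector could work if you were analyzing raw click events yourself. Clarity surfaces these signals automatically; the event format, field names, and thresholds below are illustrative assumptions, not Clarity’s actual data model.

#python
# Flag bursts of rapid clicks on the same element (a "rage click" signal).
# Event format and thresholds are assumptions for illustration.

def find_rage_clicks(events, max_gap=1.0, min_clicks=3):
    """events: list of dicts like {"t": seconds, "target": "#css-selector"},
    sorted by time. Returns targets that received rapid repeated clicks."""
    rage_targets = []
    streak = 1
    for prev, curr in zip(events, events[1:]):
        same_target = prev["target"] == curr["target"]
        quick_enough = (curr["t"] - prev["t"]) <= max_gap
        streak = streak + 1 if (same_target and quick_enough) else 1
        if streak == min_clicks:  # report each target once per burst
            rage_targets.append(curr["target"])
    return rage_targets

clicks = [
    {"t": 0.0, "target": "#save-button"},
    {"t": 0.4, "target": "#save-button"},
    {"t": 0.9, "target": "#save-button"},  # three rapid clicks on one element
    {"t": 5.0, "target": "#nav-home"},
]
print(find_rage_clicks(clicks))  # ['#save-button']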

Diagnosing Friction

Once you know where things are going wrong, you can use heat maps and session recordings in Clarity to understand the problem. Why are people excessively scrolling or rage-clicking, for example?

Session recordings are valuable for pinpointing particular problems in the interface.

If the heat maps or session recordings don’t make things clear, that is where you would need to consider usability testing.

Once you understand the problem, you can then begin exploring solutions and testing them rigorously to ensure they effectively reduce friction.

Testing Your Friction Busting Solutions

How you test your solution to the point of friction will depend on the size and complexity of the changes you need to make.

For small changes, such as tweaking the UI or changing some text, A/B testing is often the best approach. You show the new solution to a subset of your users and measure the impact on those indicators of frustration.

But A/B testing isn’t always the right approach. If your app has lower levels of traffic, getting statistically significant results from an A/B test can be time-consuming.
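
To put that in perspective, here is a rough, back-of-the-envelope sample-size sketch using the standard two-proportion formula at 95% confidence and 80% power. The baseline and uplift figures are made-up examples, not benchmarks.

#python
# Rough per-variant sample size for an A/B test on a conversion rate.
# Uses the standard two-proportion formula; z-scores are hard-coded for
# a two-sided 95% confidence level (1.96) and 80% power (0.84).

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_alpha + z_beta) ** 2) * variance / (p1 - p2) ** 2

# Example: detecting a lift from a 4% to a 5% conversion rate
print(round(sample_size_per_variant(0.04, 0.05)))  # roughly 6,700 users per variant

With around 1,000 visitors a day split across two variants, that is roughly two weeks just to reach a readable result, and smaller effects push the numbers up quickly.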

Also, when your solution involves big changes, like adding new features or redesigning many screens, A/B testing can be expensive. That is because you need to fully develop the solution before you can test it, which can prove costly if the solution turns out not to work.

Your best approach in such situations is to create a prototype for remote testing.

Initially, I usually conduct unfacilitated testing with a tool such as Maze. Unfacilitated testing is easy to set up. It requires minimal time investment, and Maze offers analytics, so you don’t necessarily need to watch every session back.

Maze serves as a valuable resource for conducting remote testing, offering both test data and recordings for each test.

If testing uncovers issues you can’t get to the bottom of, try facilitated testing. Facilitated testing enables you to delve into any issues that arise by asking questions.

Once you have a solution that works, it’s time to roll that feature out. But you need to be careful at this point because of the procedural knowledge I mentioned earlier.

Dealing With the Dangers of Procedural Knowledge

Introducing fixes to a user interface has a good chance of breaking users’ procedural knowledge. Interface elements get moved and are no longer where users expect to find them, or they look different, and users miss them.

This can upset many existing customers, which can panic stakeholders and lead to rash decisions.

To some extent, you need to accept that this is inevitable and prepare stakeholders for it. Users will normally adapt within a couple of weeks of regular use, so there is no immediate need to panic.

That said, there are things you can do to mitigate the reaction.

  1. To start with, you can let people know that change is coming. This allows people to mentally adapt to the change before it occurs.
  2. Secondly, if the change is significant, you may wish to give people the ability to opt out of it, at least in the short term. That is why some apps roll out features in beta and give users the option to opt in or out. This provides a sense of control that reduces people’s reactions.
  3. Finally, you can also provide guidance within the user interface itself. Tooltips and overlays can show users where features have been moved and highlight new interface elements.

Slack uses tooltips to explain how its interface works.

The key is to strike a balance. You must add needed improvements while causing minimal disruption to users’ workflows. You will also need to carefully monitor adoption and adapt accordingly.

Change The Way We Work

That constant monitoring and adaptation lies at the heart of digital product design. You cannot rely solely on the initial solution but must be prepared to continuously refine and iterate as user behavior and needs evolve.

]]>
hello@smashingmagazine.com (Paul Boag)
<![CDATA[Goodbye Summer, Hello September (2024 Wallpapers Edition)]]> https://smashingmagazine.com/2024/08/desktop-wallpaper-calendars-september-2024/ https://smashingmagazine.com/2024/08/desktop-wallpaper-calendars-september-2024/ Sat, 31 Aug 2024 08:00:00 GMT Lush green slowly turning into yellows and reds in the Northern hemisphere; nature reawakening in the Southern part of the world: September is a time of change. A chance to leave old habits behind and embrace the beginning of something new. And, well, sometimes it only takes a small change in routines to spark fresh inspiration and, who knows, maybe even great ideas.

With that in mind, we started our monthly wallpapers series more than 13 years ago, and from the very beginning to today, artists and designers from across the globe have submitted their designs to bring a bit of variety to your screens every month. Of course, it wasn’t any different this time around.

In this post, you’ll find their wallpaper designs for September 2024. All of them come in versions with and without a calendar and can be downloaded for free. As a little bonus goodie, we also added some favorites from past years’ September editions to the collection. So maybe you’ll spot one of your almost-forgotten favorites in here, too? A huge thank-you to everyone who shared their wallpapers with us this month — this post wouldn’t exist without you!

  • You can click on every image to see a larger preview,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experiences through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.

National Elephant Appreciation Day

“Today, we celebrate these magnificent creatures who play such a vital role in our ecosystems and cultures. Elephants are symbols of wisdom, strength, and loyalty. Their social bonds are strong, and their playful nature, especially in the young ones, reminds us of the importance of joy and connection in our lives.” — Designed by PopArt Studio from Serbia.

Summer In Costa Rica

“We continue in tropical climates. In this case, we travel to Costa Rica to observe the Arenal volcano from the lake while we use a kayak.” — Designed by Veronica Valenzuela from Spain.

Pigman And Robin

Designed by Ricardo Gimenes from Spain.

A Mind Of Their Own

“My eyes have a mind of their own: they see what they want to see…” — Designed by Bhabna Basak from India.

More Bananas

Designed by Ricardo Gimenes from Spain.

Quality Education For All

“Our team takes pride in aligning our volunteer initiatives with the 2030 Agenda for Sustainable Development’s ‘Quality Education’ goal. This goal reflects a global commitment to ensure inclusive and equitable quality education and promote lifelong learning opportunities for all. We encourage our team members to volunteer with non-profits they care about year-round. Explore local opportunities and use your skills to make a meaningful impact!” — Designed by Jenna Finberg from Portland, OR.

Green Jewellery

“I was thinking about African bead necklaces when making this wallpaper. I chose green and warm colors, because summer has not ended in the north — let’s enjoy it.” — Designed by Philippe Brouard from France.

Discover, Dream, Travel!

“Celebrate World Tourism Day by exploring new destinations and cultures around the globe!” — Designed by Reethu M from London.

Happy Labor Day

“I wanted my design to revolve around the themes of unity, hard work, and patriotism to honor the workforce that builds a great nation. The flags, the skyline, and the human figure outline evoke a sense of pride, appreciation, dedication, and solidarity.” — Designed by Cronix from the United States.

Autumn Rains

“This autumn, we expect to see a lot of rainy days and blues, so we wanted to change the paradigm and wish a warm welcome to the new season. After all, if you come to think of it: rain is not so bad if you have an umbrella and a raincoat. Come autumn, we welcome you!” — Designed by PopArt Studio from Serbia.

Terrazzo

“With the end of summer and fall coming soon, I created this terrazzo pattern wallpaper to brighten up your desktop. Enjoy the month!” — Designed by Melissa Bogemans from Belgium.

Funny Cats

“Cats are beautiful animals. They’re quiet, clean, and warm. They’re funny and can become an endless source of love and entertainment. Here for the cats!” — Designed by UrbanUI from India.

Cacti Everywhere

“Seasons come and go, but our brave cactuses still stand. Summer is almost over and autumn is coming, but the beloved plants don’t care.” — Designed by Lívia Lénárt from Hungary.

Summer Ending

“As summer comes to an end, all the creatures pull back to their hiding places, searching for warmth within themselves and dreaming of neverending adventures under the tinted sky of closing dog days.” — Designed by Ana Masnikosa from Belgrade, Serbia.

The Rebel

Designed by Ricardo Gimenes from Spain.

National Video Games Day Delight

“September 12th brings us National Video Games Day. US-based video game players love this day and celebrate with huge gaming tournaments. What was once a 2D experience in the home is now a global phenomenon with players playing against each other across state lines and national borders via the internet. National Video Games Day gives gamers the perfect chance to celebrate and socialize! So grab your controller, join online and let the games begin!” — Designed by Ever Increasing Circles from the United Kingdom.

Long Live Summer

“While September’s Autumnal Equinox technically signifies the end of the summer season, this wallpaper is for all those summer lovers, like me, who don’t want the sunshine, warm weather, and lazy days to end.” — Designed by Vicki Grunewald from Washington.

Flower Soul

“The earth has music for those who listen. Take a break and relax and while you drive out the stress, catch a glimpse of the beautiful nature around you. Can you hear the rhythm of the breeze blowing, the flowers singing, and the butterflies fluttering to cheer you up? We dedicate flowers which symbolize happiness and love to one and all.” — Designed by Krishnankutty from India.

Stay Or Leave?

Designed by Ricardo Gimenes from Spain.

Hungry

Designed by Elise Vanoorbeek from Belgium.

Rainy Flowers

Designed by Teodora Vasileva from Bulgaria.

Science Is Magic

“Science is like magic, except it’s real.” — Designed by Bhabna Basak from India.

Listen Closer… The Mushrooms Are Growing

“It’s this time of the year when children go to school and grown-ups go to collect mushrooms.” — Designed by Igor Izhik from Canada.

Batmom

Designed by Ricardo Gimenes from Spain.

Wine Harvest Season

“Welcome to the wine harvest season in Serbia. It’s September, and the hazy sunshine bathes the vines on the slopes of Fruška Gora. Everything is ready for the making of Bermet, the most famous wine from Serbia. This spiced wine was a favorite of the Austro-Hungarian elite and was served even on the Titanic. Bermet’s recipe is a closely guarded secret, and the wine is produced by just a handful of families in the town of Sremski Karlovci, near Novi Sad. On the other side of Novi Sad, plains of corn and sunflower fields blend in with the horizon, catching the last warm sun rays of this year.” — Designed by PopArt Studio from Serbia.

Bear Time

Designed by Bojana Stojanovic from Serbia.

Maryland Pride

“As summer comes to a close, so does blue crab season in Maryland. Blue crabs have been a regional delicacy since the 1700s and have become Maryland’s most valuable fishing industry, adding millions of dollars to the Maryland economy each year. The blue crab has contributed so much to the state’s regional culture and economy that, in 1989, it was named the State Crustacean, cementing its importance in Maryland history.” — Designed by The Hannon Group from Washington DC.

Finding Jaguar

“Nature and our planet have given us life, enabled us to enjoy the most wonderful place known to us in the universe. People have given themselves the right to master something they do not fully understand. We dedicate this September calendar to a true nature lover, Vedran Badjun from Dalmatia, Croatia, who inspires us to love our planet, live in harmony with it and appreciate all that it has to offer. Amazon, Siberia, and every tree or animal on the planet are treasures we lose every day. Let’s change that!” — Designed by PopArt Studio from Serbia.

Penguin Family

“Penguins are sociable, independent and able to survive harsh winters. They work as a team to care for their offspring and I love that!” — Designed by Glynnis Owen from Australia.

Early Autumn

“September is usually considered as early autumn so I decided to draw some trees and leaves. However, nobody likes that summer is coming to an end, that’s why I kept summerish colors and style.” — Designed by Kat Gluszek from Germany.

Summer Is Leaving

“It is inevitable. Summer is leaving silently. Let us think of ways to make the most of what is left of the beloved season.” — Designed by Bootstrap Dashboards from India.

Lucha Libre

“This month is Mexico’s independence day and I decided to illustrate one of the things Mexico’s best known for: the Lucha Libre.” — Designed by Maria Keller from Mexico.

Geometric Autumn

“I designed this wallpaper to remind everyone that autumn is here.” — Designed by Advanced Web Ranking from Romania.

Never Stop Exploring

Designed by Ricardo Gimenes from Spain.

Still In Vacation Mood

“It’s officially the end of summer and I’m still in vacation mood, dreaming about all the amazing places I’ve seen. This illustration is inspired by a small town in France, on the Atlantic coast, right by the beach.” — Designed by Miruna Sfia from Romania.

Colors Of September

“I love September. Its colors and smells.” — Designed by Juliagav from Ukraine.

Office

“Clean, minimalistic office for a productive day.” — Designed by Antun Hiršman from Croatia.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[Integrating Image-To-Text And Text-To-Speech Models (Part 2)]]> https://smashingmagazine.com/2024/08/integrating-image-to-text-and-text-to-speech-models-part2/ https://smashingmagazine.com/2024/08/integrating-image-to-text-and-text-to-speech-models-part2/ Fri, 30 Aug 2024 09:00:00 GMT In Part 1 of this brief two-part series, we developed an application that turns images into audio descriptions using vision-language and text-to-speech models. We combined an image-to-text model that analyzes and understands images, generating descriptions, with a text-to-speech model to create an audio description, helping people with sight challenges. We also discussed how to choose the right model to fit your needs.

Now, we are taking things a step further. Instead of just providing audio descriptions, we are building an app that can have interactive conversations about images or videos. This is known as Conversational AI — a technology that lets users talk to systems much like chatbots, virtual assistants, or agents.

While the first iteration of the app was great, the output still lacked some details. For example, if you upload an image of a dog, the description might be something like “a dog sitting on a rock in front of a pool,” missing additional details such as the dog’s breed, the time of day, or the location.

The aim here is simply to build a more advanced version of the previously built app so that it not only describes images but also provides more in-depth information and engages users in meaningful conversations about them.

We’ll use LLaVA, a model that combines image understanding with conversational capabilities. After building our tool, we’ll explore multimodal models that can handle images, videos, text, audio, and more, all at once, to give you even more options and flexibility for your applications.

Visual Instruction Tuning and LLaVA

We are going to look at visual instruction tuning and the multimodal capabilities of LLaVA. We’ll first explore how visual instruction tuning can enhance the large language models to understand and follow instructions that include visual information. After that, we’ll dive into LLaVA, which brings its own set of tools for image and video processing.

Visual Instruction Tuning

Visual instruction tuning is a technique that helps large language models (LLMs) understand and follow instructions based on visual inputs. This approach connects language and vision, enabling AI systems to understand and respond to human instructions that involve both text and images. For example, Visual IT enables a model to describe an image or answer questions about a scene in a photograph. This fine-tuning method makes the model more capable of handling these complex interactions effectively.
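
To make this more tangible, here is what a single training sample might look like, loosely modeled on the conversation format used in LLaVA-style instruction datasets. The exact keys and the file name are illustrative assumptions.

#python
# One illustrative visual-instruction-tuning sample (keys are assumptions).
sample = {
    "image": "dog_by_pool.jpg",  # hypothetical image file
    "conversations": [
        {"from": "human", "value": "<image>\nWhat is the dog doing?"},
        {"from": "gpt", "value": "A golden retriever is sitting on a rock beside a pool."},
        {"from": "human", "value": "What time of day does it look like?"},
        {"from": "gpt", "value": "The long shadows suggest late afternoon."},
    ],
}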

A new training approach called LLaVAR has been developed; you can think of it as a tool for handling tasks related to PDFs, invoices, and text-heavy images. It’s pretty exciting, but we won’t dive into it since it is outside the scope of the app we’re making.

Examples of Visual Instruction Tuning Datasets

To build good models, you need good data — rubbish in, rubbish out. So, here is a dataset that you might want to use to train or evaluate your multimodal models. Of course, you can always add your own datasets to it.

Vision-CAIR

  • Instruction datasets: English;
  • Multi-task: Datasets containing multiple tasks;
  • Mixed dataset: Contains both human and machine-generated data.

Vision-CAIR provides a high-quality, well-aligned image-text dataset created using conversations between two bots. This dataset was initially introduced in a paper titled “MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models,” and it provides more detailed image descriptions and can be used with predefined instruction templates for image-instruction-answer fine-tuning.

There are more multimodal datasets out there, but this one should help you get started if you want to fine-tune your model.

Let’s Take a Closer Look At LLaVA

LLaVA (which stands for Large Language and Vision Assistant) is a groundbreaking multimodal model developed by researchers from the University of Wisconsin, Microsoft Research, and Columbia University. The researchers aimed to create a powerful, open-source model that could compete with the best in the field, just like GPT-4, Claude 3, or Gemini, to name a few. For developers like you and me, its open nature is a huge benefit, allowing for easy fine-tuning and integration.

One of LLaVA’s standout features is its ability to understand and respond to complex visual information, even with unfamiliar images and instructions. This is exactly what we need for our tool, as it goes beyond simple image descriptions to engage in meaningful conversations about the content.

Architecture

LLaVA’s strength lies in its smart use of existing models. Instead of starting from scratch, the researchers used two key models:

  • CLIP VIT-L/14
    This is an advanced version of the CLIP (Contrastive Language–Image Pre-training) model developed by OpenAI. CLIP learns visual concepts from natural language descriptions. It can handle any visual classification task by simply being given the names of the visual categories, similar to the “zero-shot” capabilities of GPT-2 and GPT-3.
  • Vicuna
    This is an open-source chatbot trained by fine-tuning LLaMA on 70,000 user-shared conversations collected from ShareGPT. Training Vicuna-13B costs around $300, and it performs exceptionally well, even when compared to other models like Alpaca.

These components make LLaVA highly effective by combining state-of-the-art visual and language understanding capabilities into a single powerful model, perfectly suited for applications requiring both visual and conversational AI.
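
To visualize how the two parts connect, here is a minimal sketch of the projection step in PyTorch. The dimensions are assumptions based on CLIP ViT-L/14’s 1,024-dimensional patch features, 336-pixel inputs (24 × 24 = 576 patches), and Vicuna-7B’s 4,096-dimensional embeddings; LLaVA’s real code also handles batching, tokenization, and training.

#python
import torch
import torch.nn as nn

# Assumed dimensions: CLIP ViT-L/14 patch features (1024) get projected
# into the language model's embedding space (4096 for Vicuna-7B).
clip_dim, llm_dim, num_patches = 1024, 4096, 576

projection = nn.Linear(clip_dim, llm_dim)  # the "bridge" updated in training stage 1

image_features = torch.randn(1, num_patches, clip_dim)  # stand-in for CLIP output
visual_tokens = projection(image_features)              # now shaped like text embeddings
print(visual_tokens.shape)  # torch.Size([1, 576, 4096])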

Training

LLaVA’s training process involves two important stages, which together enhance its ability to understand user instructions, interpret visual and language content, and provide accurate responses. Let’s detail what happens in these two stages:

  1. Pre-training for Feature Alignment
    LLaVA ensures that its visual and language features are aligned. The goal here is to update the projection matrix, which acts as a bridge between the CLIP visual encoder and the Vicuna language model. This is done using a subset of the CC3M dataset, allowing the model to map input images and text to the same space. This step ensures that the language model can effectively understand the context from both visual and textual inputs.
  2. End-to-End Fine-Tuning
    The entire model undergoes fine-tuning. While the visual encoder’s weights remain fixed, the projection layer and the language model are adjusted.

The second stage is tailored to specific application scenarios:

  • Instructions-Based Fine-Tuning
    For general applications, the model is fine-tuned on a dataset designed for following instructions that involve both visual and textual inputs, making the model versatile for everyday tasks.
  • Scientific reasoning
    For more specialized applications, particularly in science, the model is fine-tuned on data that requires complex reasoning, helping the model excel at answering detailed scientific questions.

Now that we’re keen on what LLaVA is and the role it plays in our applications, let’s turn our attention to the next component we need for our work, Whisper.

Using Whisper For Speech-To-Text

In this chapter, we’ll check out Whisper, a great model for turning speech into text. Whisper is accurate and easy to use, making it perfect for transcribing the spoken questions users will ask our assistant. We’ve used Whisper in a different article, but here, we’re going to use a new version — large-v3. This updated version of the model offers even better performance and speed.

Whisper large-v3

Whisper was developed by OpenAI, the same folks behind ChatGPT. It is a pre-trained model for automatic speech recognition (ASR) and speech translation. The original Whisper was trained on 680,000 hours of labeled data.

Now, what’s different with Whisper large-v3 compared to other models? In my experience, it comes down to the following:

  • Better inputs
    Whisper large-v3 uses 128 Mel frequency bins instead of 80. Think of Mel frequency bins as a way to break down audio into manageable chunks for the model to process. More bins mean finer detail, which helps the model better understand the audio (see the sketch after this list).
  • More training
    This specific Whisper version was trained on 1 million hours of weakly labeled audio and 4 million hours of pseudo-labeled audio that was collected from Whisper large-v2. From there, the model was trained for 2.0 epochs over this mix.
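
If you are curious what that looks like in code, here is a small sketch using the openai-whisper package, which, in recent versions, lets you request 128 mel bins to match large-v3. The audio file name is a placeholder.

#python
import whisper

audio = whisper.load_audio("sample.wav")  # placeholder file name
audio = whisper.pad_or_trim(audio)        # Whisper works on 30-second windows

# Earlier Whisper models default to 80 mel bins; large-v3 expects 128
mel = whisper.log_mel_spectrogram(audio, n_mels=128)
print(mel.shape)  # torch.Size([128, 3000]) for one 30-second window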

Whisper models come in different sizes, from tiny to large. Here’s a table comparing the differences and similarities:

Size     Parameters English-only Multilingual
tiny     39 M       ✓            ✓
base     74 M       ✓            ✓
small    244 M      ✓            ✓
medium   769 M      ✓            ✓
large    1550 M                  ✓
large-v2 1550 M                  ✓
large-v3 1550 M                  ✓

Integrating LLaVA With Our App

Alright, so we’re going with LLaVA for image inputs, and this time, we’re adding video inputs, too. This means the app can handle both images and videos, making it more versatile.

We’re also keeping the speech feature so you can hear the assistant’s replies, which makes the interaction even more engaging. How cool is that?

For transcribing your spoken questions, we’ll use Whisper; the assistant’s voice replies are generated with gTTS. We’ll stick with the Gradio framework for the app’s visual layout and user interface. You can, of course, always swap in other models or frameworks — the main goal is to get a working prototype.

Installing and Importing the Libraries

We will start by installing and importing all the required libraries. This includes the transformers library for loading the LLaVA model, bitsandbytes for quantization, the openai-whisper package for speech recognition, gTTS for text-to-speech, and moviepy to help in processing video files, including frame extraction.

#python
!pip install -q -U transformers==4.37.2
!pip install -q bitsandbytes==0.41.3 accelerate==0.25.0
!pip install -q git+https://github.com/openai/whisper.git
!pip install -q gradio
!pip install -q gTTS
!pip install -q moviepy

With these installed, we now need to import these libraries into our environment so we can use them. We’ll use colab for that:

#python
import torch
from transformers import BitsAndBytesConfig, pipeline
import whisper
import gradio as gr
from gtts import gTTS
from PIL import Image
import re
import os
import datetime
import locale
import numpy as np
import nltk
import moviepy.editor as mp

nltk.download('punkt')
from nltk import sent_tokenize

# Set up locale
os.environ["LANG"] = "en_US.UTF-8"
os.environ["LC_ALL"] = "en_US.UTF-8"
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')

Configuring Quantization and Loading the Models

Now, let’s set up a 4-bit quantization to make the LLaVA model more efficient in terms of performance and memory usage.

#python

# Configuration for quantization
quantization_config = BitsAndBytesConfig(
  load_in_4bit=True,
  bnb_4bit_compute_dtype=torch.float16
)

# Load the image-to-text model
model_id = "llava-hf/llava-1.5-7b-hf"
pipe = pipeline("image-to-text",
  model=model_id,
  model_kwargs={"quantization_config": quantization_config})

# Load the whisper model
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
model = whisper.load_model("large-v3", device=DEVICE)

In this code, we’ve configured the quantization to four bits, which reduces memory usage and improves performance. Then, we load the LLaVA model with these settings. Finally, we load the whisper model, selecting the device based on GPU availability for better performance.

Note: We’re using llava-1.5-7b as the model. Please feel free to explore other versions of the model. For Whisper, we’re loading the “large-v3” size, but you can also switch to another size like “medium” or “small” for your experiments.

To get our assistant up and running, we need to implement five essential functions:

  1. Handling conversations,
  2. Converting images to text,
  3. Converting videos to text,
  4. Transcribing audio,
  5. Converting text to speech.

Once these are in place, we will create another function to tie all this together seamlessly. The following sections provide the code that defines each function.

Conversation History

We’ll start by setting up the conversation history and a function to log it:

#python

# Initialize conversation history
conversation_history = []

def writehistory(text):
  """Write history to a log file."""
  tstamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
  logfile = f'{tstamp}_log.txt'
  with open(logfile, 'a', encoding='utf-8') as f:
    f.write(text + '\n')

Image to Text

Next, we’ll create a function to convert images to text using LLaVA and iterative prompts.

#python
def img2txt(input_text, input_image):
  """Convert image to text using iterative prompts."""
  try:
    image = Image.open(input_image)

    if isinstance(input_text, tuple):
      input_text = input_text[0]  # Take the first element if it's a tuple

    writehistory(f"Input text: {input_text}")
    prompt = "USER: <image>\n" + input_text + "\nASSISTANT:"
    while True:
      outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})

      if outputs and outputs[0]["generated_text"]:
        match = re.search(r'ASSISTANT:\s*(.*)', outputs[0]["generated_text"])
        reply = match.group(1) if match else "No response found."
        conversation_history.append(("User", input_text))
        conversation_history.append(("Assistant", reply))
        prompt = "USER: " + reply + "\nASSISTANT:"
        return reply  # Only return the first response for now
      else:
        return "No response generated."
  except Exception as e:
    return str(e)

Video to Text

We’ll now create a function to convert videos to text by extracting frames and analyzing them.

#python
def vid2txt(input_text, input_video):
  """Convert video to text by extracting frames and analyzing."""
  try:
    video = mp.VideoFileClip(input_video)
    frame = video.get_frame(1)  # Get a frame from the video at the 1-second mark
    image_path = "temp_frame.jpg"
    mp.ImageClip(frame).save_frame(image_path)
    return img2txt(input_text, image_path)
  except Exception as e:
    return str(e)

Audio Transcription

Let’s add a function to transcribe audio to text using Whisper.

#python
def transcribe(audio_path):
  """Transcribe audio to text using Whisper model."""
  if not audio_path:
    return ''

  audio = whisper.load_audio(audio_path)
  audio = whisper.pad_or_trim(audio)
  mel = whisper.log_mel_spectrogram(audio).to(model.device)
  options = whisper.DecodingOptions()
  result = whisper.decode(model, mel, options)
  return result.text

Text to Speech

Lastly, we create a function to convert text responses into speech.

#python
def text_to_speech(text, file_path):
  """Convert text to speech and save to file."""
  language = 'en'
  audioobj = gTTS(text=text, lang=language, slow=False)
  audioobj.save(file_path)
  return file_path

With all the necessary functions in place, we can create the main function that ties everything together:

#python

def chatbot_interface(audio_path, image_path, video_path, user_message):
  """Process user inputs and generate chatbot response."""
  global conversation_history

  # Handle audio input
  if audio_path:
    speech_to_text_output = transcribe(audio_path)
  else:
    speech_to_text_output = ""

  # Determine the input message
  input_message = user_message if user_message else speech_to_text_output

  # Ensure input_message is a string
  if isinstance(input_message, tuple):
    input_message = input_message[0]

  # Handle image or video input
  if image_path:
    chatgpt_output = img2txt(input_message, image_path)
  elif video_path:
    chatgpt_output = vid2txt(input_message, video_path)
  else:
    chatgpt_output = "No image or video provided."

  # Add to conversation history
  conversation_history.append(("User", input_message))
  conversation_history.append(("Assistant", chatgpt_output))

  # Generate audio response
  processed_audio_path = text_to_speech(chatgpt_output, "Temp3.mp3")

  return conversation_history, processed_audio_path

Using Gradio For The Interface

The final piece for us is to create the layout and user interface for the app. Again, we’re using Gradio to build that out for quick prototyping purposes.

#python

# Define Gradio interface
iface = gr.Interface(
  fn=chatbot_interface,
  inputs=[
    gr.Audio(type="filepath", label="Record your message"),
    gr.Image(type="filepath", label="Upload an image"),
    gr.Video(label="Upload a video"),
    gr.Textbox(lines=2, placeholder="Type your message here...", label="User message (if no audio)")
  ],
  outputs=[
    gr.Chatbot(label="Conversation"),
    gr.Audio(label="Assistant's Voice Reply")
  ],
  title="Interactive Visual and Voice Assistant",
  description="Upload an image or video, record or type your question, and get detailed responses."
)

# Launch the Gradio app
iface.launch(debug=True)

Here, we want to let users record or upload their audio prompts, type their questions if they prefer, upload videos, and, of course, have a conversation block.

Here’s a preview of how the app will look and work:

Looking Beyond LLaVA

LLaVA is a great model, but there are even greater ones that don’t require a separate ASR model to build a similar app. These are called multimodal or “any-to-any” models. They are designed to process and integrate information from multiple modalities, such as text, images, audio, and video. Instead of just combining vision and text, these models can do it all: image-to-text, video-to-text, text-to-speech, speech-to-text, text-to-video, and image-to-audio, just to name a few. It makes everything simpler and less of a hassle.

Examples of Multimodal Models that Handle Images, Text, Audio, and More

Now that we know what multimodal models are, let’s check out some cool examples. You may want to integrate these into your next personal project.

CoDi

So, the first on our list is CoDi or Composable Diffusion. This model is pretty versatile, not sticking to any one type of input or output. It can take in text, images, audio, and video and turn them into different forms of media. Imagine it as a sort of AI that’s not tied down by specific tasks but can handle a mix of data types seamlessly.

CoDi was developed by researchers from the University of North Carolina and Microsoft Azure. It uses something called Composable Diffusion to sync different types of data, like aligning audio perfectly with the video, and it can generate outputs that weren’t even in the original training data, making it super flexible and innovative.

ImageBind

Now, let’s talk about ImageBind, a model from Meta. This model is like a multitasking genius, capable of binding together data from six different modalities all at once: images, video, audio, text, depth, and even thermal data.

Source: Meta AI.

ImageBind doesn’t need explicit supervision to understand how these data types relate. It’s great for creating systems that use multiple types of data to enhance our understanding or create immersive experiences. For example, it could combine 3D sensor data with IMU data to design virtual worlds or enhance memory searches across different media types.

Gato

Gato is another fascinating model. It’s built to be a generalist agent that can handle a wide range of tasks using the same network. Whether it’s playing games, chatting, captioning images, or controlling a robot arm, Gato can do it all.

The key thing about Gato is its ability to switch between different types of tasks and outputs using the same model.

GPT-4o

Next on our list is GPT-4o, a groundbreaking multimodal large language model (MLLM) developed by OpenAI. It can handle any mix of text, audio, image, and video inputs and give you text, audio, and image outputs. It’s super quick, responding to audio inputs in as little as 232ms, with an average of 320ms, which is almost like a real conversation.

There’s a smaller version of the model called GPT-4o Mini. Small models are becoming a trend, and this one shows that even small models can perform really well. Check out this evaluation to see how the small model stacks up against other large models.

Conclusion

We covered a lot in this article, from setting up LLaVA for handling both images and videos to incorporating Whisper large-v3 for top-notch speech recognition. We also explored the versatility of multimodal models like CoDi or GPT-4o, showcasing their potential to handle various data types and tasks. These models can make your app more robust and capable of handling a range of inputs and outputs seamlessly.

Which model are you planning to use for your next app? Let me know in the comments!

]]>
hello@smashingmagazine.com (Joas Pambou)
<![CDATA[Generating Unique Random Numbers In JavaScript Using Sets]]> https://smashingmagazine.com/2024/08/generating-unique-random-numbers-javascript-using-sets/ https://smashingmagazine.com/2024/08/generating-unique-random-numbers-javascript-using-sets/ Mon, 26 Aug 2024 15:00:00 GMT JavaScript comes with a lot of built-in functions that allow you to carry out so many different operations. One of these built-in functions is the Math.random() method, which generates a random floating-point number that can then be manipulated into integers.

However, if you wish to generate a series of unique random numbers and create more random effects in your code, you will need to come up with a custom solution for yourself because the Math.random() method on its own cannot do that for you.

In this article, we’re going to be learning how to circumvent this issue and generate a series of unique random numbers using the Set object in JavaScript, which we can then use to create more randomized effects in our code.

Note: This article assumes that you know how to generate random numbers in JavaScript, as well as how to work with sets and arrays.

Generating a Unique Series of Random Numbers

One of the ways to generate a unique series of random numbers in JavaScript is by using Set objects. The reason we’re making use of sets is that the elements of a set are unique. We can iteratively generate and insert random integers into sets until we get the number of integers we want.

And since sets do not allow duplicate elements, they are going to serve as a filter to remove all of the duplicate numbers that are generated and inserted into them so that we get a set of unique integers.

Here’s how we are going to approach the work:

  1. Create a Set object.
  2. Define how many random numbers to produce and what range of numbers to use.
  3. Generate random numbers and immediately insert them into the Set until it contains the desired count of numbers.

The following is a quick example of how the code comes together:

function generateRandomNumbers(count, min, max) {
  // 1: Create a Set object
  let uniqueNumbers = new Set();
  while (uniqueNumbers.size < count) {
    // 2: Generate a random number in [min, max] and add it to the Set
    uniqueNumbers.add(Math.floor(Math.random() * (max - min + 1)) + min);
  }
  // 3: Convert the Set to an array and return it
  return Array.from(uniqueNumbers);
}
// Set how many numbers to generate and the range to draw them from
console.log(generateRandomNumbers(5, 5, 10));

What the code does is create a new Set object and then generate and add random numbers to the set until our desired number of integers has been included. The reason we’re returning an array is that arrays are easier to work with.

One thing to note, however, is that the number of integers you want to generate (represented by count in the code) cannot exceed the number of integers in your range (max - min + 1 in the code). Otherwise, the code will run forever. You can add an if statement to the code to ensure that this is always the case:

function generateRandomNumbers(count, min, max) {
  // if statement checks that count does not exceed the size of the range
  if (count > max - min + 1) {
    return "count cannot be greater than the number of integers in the range";
  } else {
    let uniqueNumbers = new Set();
    while (uniqueNumbers.size < count) {
      uniqueNumbers.add(Math.floor(Math.random() * (max - min + 1)) + min);
    }
    return Array.from(uniqueNumbers);
  }
}
console.log(generateRandomNumbers(5, 5, 10));

Using the Series of Unique Random Numbers as Array Indexes

It is one thing to generate a series of random numbers. It’s another thing to use them.

Being able to use a series of random numbers with arrays unlocks so many possibilities: you can use them in shuffling playlists in a music app, randomly sampling data for analysis, or, as I did, shuffling the tiles in a memory game.

Let’s take the code from the last example and work off of it to return random letters of the alphabet. First, we’ll construct an array of letters:

const englishAlphabets = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
];

// rest of code

Then we map the letters in the range of numbers:

const englishAlphabets = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
];

// generateRandomNumbers()

const randomAlphabets = randomIndexes.map((index) => englishAlphabets[index]);

In the original code, the generateRandomNumbers() function is logged to the console. This time, we’ll construct a new variable that calls the function so it can be consumed by randomAlphabets:

const englishAlphabets = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
];

// generateRandomNumbers()

const randomIndexes = generateRandomNumbers(5, 0, 25);
const randomAlphabets = randomIndexes.map((index) => englishAlphabets[index]);

Now we can log the output to the console like we did before to see the results:

const englishAlphabets = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
];

// generateRandomNumbers()

const randomIndexes = generateRandomNumbers(5, 0, 25);
const randomAlphabets = randomIndexes.map((index) => englishAlphabets[index]);
console.log(randomAlphabets);

And when we put the generateRandomNumbers() function definition back in, we get the final code:

const englishAlphabets = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
];
function generateRandomNumbers(count, min, max) {
  if (count > max - min + 1) {
    return "count cannot be greater than the number of integers in the range";
  } else {
    let uniqueNumbers = new Set();
    while (uniqueNumbers.size < count) {
      uniqueNumbers.add(Math.floor(Math.random() * (max - min + 1)) + min);
    }
    return Array.from(uniqueNumbers);
  }
}
const randomIndexes = generateRandomNumbers(5, 0, 25);
const randomAlphabets = randomIndexes.map((index) => englishAlphabets[index]);
console.log(randomAlphabets);

So, in this example, we created a new array of letters by randomly selecting some entries from our englishAlphabets array.

You can pass in a count argument of englishAlphabets.length to the generateRandomNumbers function if you desire to shuffle the elements in the englishAlphabets array instead. This is what I mean:

generateRandomNumbers(englishAlphabets.length, 0, 25);

Wrapping Up

In this article, we’ve discussed how to create randomization in JavaScript by covering how to generate a series of unique random numbers, how to use these random numbers as indexes for arrays, and also some practical applications of randomization.

The best way to learn anything in software development is by consuming content and reinforcing whatever knowledge you’ve gotten from that content by practicing. So, don’t stop here. Run the examples in this tutorial (if you haven’t done so), play around with them, come up with your own unique solutions, and also don’t forget to share your good work. Ciao!

]]>
hello@smashingmagazine.com (Amejimaobari Ollornwi)
<![CDATA[Mastering Typography In Logo Design]]> https://smashingmagazine.com/2024/08/mastering-typography-in-logo-design/ https://smashingmagazine.com/2024/08/mastering-typography-in-logo-design/ Thu, 22 Aug 2024 10:00:00 GMT Typography is much more than just text on a page — it forms the core of your design. As a designer, I always approach selecting types from two angles: as a creative adventure and as a technical challenge.

Choosing the right typeface for a company, product, or service is an immensely important task. At that moment, you’re not only aligning with the brand’s identity but also laying the foundation that will reinforce it. Finding the right typeface can be a time-consuming process that often begins with an endless search. During this search, you can get tangled up in the many different typefaces, which, over time, all start to look the same.

In this article, I aim to provide you with the essential background and tools to enhance your typography journey and apply this knowledge to your logo design. We will focus on three key pillars:

  1. Font Choice
  2. Font Weight
  3. Letter Spacing

We will travel back in time to uncover the origins of various typefaces. By exploring different categories, we will illustrate the distinctions with examples and describe the unique characteristics of each category.

Additionally, we will discuss the different font weights and offer advice on when to use each variant. We will delve into letter-spacing and kerning, explaining what they are and how to effectively apply them in your logo designs.

Finally, we will examine how the right typeface choices can significantly influence the impact and success of a brand. With this structured approach, I will show you how to create a logo that is not only expressive but also purposeful and well-thought-out.

Understanding Typography in Logo Design

From the invention of the Gutenberg press in the mid-15th century through the creation of the first Slab Serif in 1815 and the design of the first digital typeface in 1968, the number of available fonts has grown exponentially. Today, websites like WhatFontIs, a font finder platform, catalog over a million fonts.

So, the one downside of not being born in the 15th century is that your task of choosing the right font has grown enormously. And once you’ve made the right choice out of a million-plus fonts, there are still many pitfalls to watch out for.

Fortunately for us, all these fonts have already been categorized. In this article, we refer to the following four categories: serif, sans serif, script, and display typefaces. But why do we have these categories, and how do we benefit from them today?

Each category has its specific uses. Serif typefaces are often used for books due to their enhancement of readability on paper, while sans serif typefaces are ideal for screens because of their clean lines. Different typefaces also evoke different emotions: for example, script can convey elegance, while sans serif offers a more modern look. Additionally, typeface categories have a rich history, with Old Style Serifs inspired by Roman inscriptions and Modern Serifs designed for greater contrast.

Today, these categories provide a fundamental basis for choosing the right typeface for any project.

As mentioned, different typefaces evoke different emotions; like people, they convey distinct characteristics:

  • Serif fonts are seen as traditional and trustworthy;
  • Sans Serif fonts are seen as modern and clear;
  • Script fonts can come across as elegant and/or informal depending on the style;
  • Display fonts are often bold and dynamic.

Historically, typefaces reflected cultural identities, but the “new typography” movement sought a universal style. Designers emphasized that typefaces should match the character of the text, a view also supported by the Bauhaus school.

Different Fonts And Their Characteristics

We have touched upon the history of different typeface categories. Now, to make a good font choice, we need to explore these categories and see what sets them apart, as each one has specific characteristics. In this article, we refer to the following four categories: serif, sans serif, script, and display.

Let’s take a closer look at each category.

A serif typeface is a typeface that features small lines or decorative elements at the ends of the strokes. These small lines are called “serifs”.

A sans-serif typeface is a typeface that lacks the small lines or decorative elements at the ends of the strokes, resulting in a clean and modern appearance. The term “sans-serif” comes from the French word “sans,” meaning “without,” so sans-serif translates to “without serif.”

A script typeface is a typeface that mimics the fluid strokes of handwriting or calligraphy, featuring connected letters and flowing strokes for an elegant or artistic appearance.

A display typeface is a typeface designed for large sizes, such as headlines or titles, characterized by bold, decorative elements that make a striking visual impact.

Typeface Persona in Practice

Experts link typeface characteristics to physical traits. Sans serif faces are perceived as cleaner and more modern, while rounded serifs are friendly and squared serifs are more official. Light typefaces are seen as delicate and feminine, and heavy ones are seen as strong and masculine. Some typefaces are designed to be child-friendly with smoother shapes. Traditional serifs are often considered bookish, while sans serifs are seen as modern and no-nonsense.

Based on the provided context, we can assign the following characteristics per category:

  • Serif: Bookish, Traditional, Serious, Official, Respectable, Trustworthy.
  • Sans Serif: Clean, Modern, Technical, No-nonsense, Machine-like, Clear.
  • Script: Elegant, Informal, Feminine, Friendly, Flowing.
  • Display: Dramatic, Sophisticated, Urban, Theatrical, Bold, Dynamic.

Let me provide you with a real real-life logo example to help visualize how different typeface categories convey these characteristics.

We’re focusing on ING, a major bank headquartered in the Netherlands. Before we dive into the logo itself, let’s first zoom in on some brand values. On their website, it is stated that they “value integrity above all” and “will not ignore, tolerate, or excuse behavior that breaches our values. To do so would break the trust of society and the trust of the thousands of colleagues who do the right thing.”

Given the strong emphasis on integrity, trust, and adherence to values, the most suitable typeface category would likely be a serif.

The serif font in the ING logo conveys a sense of authority, professionalism, and experience associated with the brand.

Let’s choose a different font for the logo. The font used in the example is Poppins Bold, a geometric sans-serif typeface.

The sans-serif typeface in this version of the ING logo conveys modernity, simplicity, and accessibility. These are all great traits for a company to convey, but they align less with the brand’s chosen values of integrity, trust, and adherence to tradition. A serif typeface often represents these traits more effectively. While the sans-serif version of the logo may be more accessible and modern, it could also convey a sense of casualness that misaligns with the brand’s values.

So let’s see these traits in action with a game called “Assign the Trait.” The rules are simple: you are shown two different fonts, and you choose which font best represents the given trait.

Understanding these typeface personas is crucial when aligning typography with a company’s brand identity. The choice of typeface should reflect and reinforce the brand’s characteristics and values, ensuring a cohesive and impactful visual identity.

We covered a lot of ground, and I hope you now have a better understanding of different typeface categories and their characteristics. I also hope that the little game of “Assign the Trait” has given you a better grasp of the differences between them. This game would also be great to play while you’re walking your dog or going for a run. See a certain logo on the back of a lorry? Which typeface category does it belong to, and what traits does it convey?

Now, let’s further explore the importance of aligning the typeface with the brand identity.

Brand Identity and Consistency

The most important aspect when choosing a typeface is that it aligns with the company’s brand identity. We have reviewed various typeface options, and each has its unique characteristics. You can link these characteristics to those of the company.

As discussed in the previous section, a sans-serif is more “modern” and “no-nonsense”. So, for a modern company, a sleek sans-serif typeface often fits better than a classic Serif typeface. In the previous section, we examined the ING logo and how the use of a sans-serif typeface gave it a more modern appearance, but it also reduced the emphasis on certain traits that ING wants to convey with its brand.

To further illustrate the impact of typeface on logo design, let’s explore some more ‘extreme’ examples.

Our first ‘Extreme’ example is Haribo, which is an iconic gummy candy brand. They use a custom sans-serif typeface.

Let’s zoom in on a couple of characteristics of the typeface and explore why this is a great match for the brand.

  • Playfulness: The rounded, bold shapes give the logo a playful and child-friendly feel, aligning with its target audience of children and families.
  • Simplicity: The simple, easily readable sans-serif design makes it instantly recognizable and accessible.
  • Friendliness: The soft, rounded edges of the letters convey a sense of friendliness and positivity.

Second up is Fanta, a global soft drink brand that also uses a custom sans-serif typeface.

  • Handcrafted, Cut-Paper Aesthetic: The letters are crafted to appear as though they’ve been cut from paper, giving the typeface a distinct, hand-made look that adds warmth and creativity.
  • Expressive: The logo design is energetic and packed with personality, perfectly embodying Fanta’s fun, playful, and youthful vibe.

Using these ‘extreme’ cases, we can really see the power that a well-aligned typeface can have. Both cases embody the fun and friendly values of the brand. While the nuances may be more subtle in other cases, the power is still there.

Now, let’s delve deeper into the different typefaces and also look at weight, style, and letter spacing.

Elements of Typography in Logo Design

Now that we have a background of the different typeface categories, let’s zoom in on three other elements of typography in logo design: typefaces, weight and style, and letter-spacing.

Typefaces

Each category of typefaces has a multitude of options. The choice of the right typeface is crucial and perhaps the most important decision when designing a logo. It’s important to realize that often, there isn’t a single ‘best’ choice. To illustrate, we have four variations of the Adidas logo below. Each typeface could be considered a good choice. It’s crucial not to get fixated on finding the perfect typeface. Instead, ensure it aligns with the brand identity and looks good in practical use.

These four typefaces could arguably all be great choices for the Adidas brand, as they each possess the clean, bold, and sans-serif qualities that align with the brand’s values of innovation, courage, and ownership. While the details of typeface selection are important, it’s essential not to get overly fixated on them. The key is to ensure that the typeface resonates with the brand’s identity and communicates its core values effectively. Ultimately, the right typeface is one that not only looks good but also embodies the spirit and essence of the brand.

Let’s zoom in on the different weights and styles each typeface offers.

Weight and Style

Each typeface can come in anywhere from one to more than ten different styles, including choices such as Roman and Italic and various weights like Light, Regular, Semi-Bold, and Bold.

Personally, I often lean towards a Roman in Semi-Bold or Bold variant, but this choice heavily depends on the desired appearance, brand name, and brand identity. So, how do you know which font weight to choose?

When to choose bold fonts

  • Brand Identity
    If the brand is associated with strength, confidence, and modernity, bold fonts can effectively communicate these attributes.
  • Visibility and Readability
    Bold fonts are easy to read from a distance, making them perfect for signage, billboards, and other large formats.
  • Minimalist Design
    Using bold fonts in minimalist logos not only ensures that the logo stands out but also aligns with the principles of minimalism, where less is more.

Letter-spacing & Kerning

An important aspect of typography is overall word spacing, also known as tracking. This refers to the overall spacing between characters in a block of text. By adjusting the tracking in logo design, we can influence the overall look of the logo. We can make a logo more spacious and open or more compact and tight with minimal adjustments.

Designer and design educator Ellen Lupton states that kerning adjusts the spacing between individual characters in a typeface to ensure visual uniformity. When letters are spaced too uniformly, gaps can appear around certain letters like W, Y, V, T, and L. Modern digital typefaces use kerning pairs tables to control these spaces and create a more balanced look.

Tracking and kerning are often confused. To clarify, tracking (letter-spacing) adjusts the space between all letters uniformly, while kerning specifically involves adjusting the distance between individual pairs of letters to improve the readability and aesthetics of the text.

In the example shown below, we observe the concept of kerning in typography. The middle instance of “LEAF” displays the word without any kerning adjustments, where the spacing between each letter is uniform and unaltered.

In the first “LEAF,” kerning adjustments have been applied between the letters ‘A’ and ‘F’, reducing the space between them to create a more visually appealing and cohesive pair.

In the last “LEAF,” kerning has been applied differently, adjusting the space between ‘E’ and ‘A’. This alteration shifts the visual balance of the word, showing how kerning can change the aesthetics and readability of text (or logo) by fine-tuning the spacing between individual letter pairs.

Essential Techniques for Selecting Typefaces

Matching Typeface Characteristics with Brand Identity

As we discussed earlier, different categories of typefaces have unique characteristics that can align well with, or deviate from, the brand identity you want to convey. This is a great starting point on which to base your initial choice.

Inspiration

A large part of the creative process is seeking inspiration. Especially now that you’ve settled on a category, it’s worth seeing the different typefaces in action. This helps you visualize what does and doesn’t work for your brand.

Trust the Crowd

Some typefaces are used more frequently than others. Therefore, choosing typefaces that have been tried and tested over the years is a good starting point. It’s important to distinguish between a popular typeface and a trendy one. In this context, I refer to typefaces that have been “popular” for a long time. Let’s break down some of these typefaces.

Helvetica

One of the most well-known typefaces is Helvetica, renowned for its intrinsic legibility and clarity since its 1957 debut. Helvetica’s tall x-height, open counters, and neutral letterforms allow it to lend a clean and professional look to any logo.

Some well-known brands that use Helvetica are BMW, Lufthansa, and Nestlé.

Futura

Futura has been helping brands convey their identity for almost a century. Designed in 1927, it is celebrated for its geometric simplicity and modernist design. Futura’s precise and clean lines give it a distinctive and timeless look.

Some well-known brands that use Futura are Louis Vuitton, Red Bull, and FedEx.

That said, you naturally have all the creative freedom, and making a bold choice can turn out fantastic, especially for brands where this is desirable.

Two’s Company, Three’s a Crowd

Combining typefaces is a challenging task. But if you want to create a logo with two different typefaces, make sure there is enough contrast between the two. For example, combine a serif with a sans-serif. If the two typefaces look too similar, it’s better to stick to one typeface. And whatever you do, never use more than two typefaces in a logo.

Let’s Build a Brand Logo

Now that we’ve gone through the above steps, it seems like a good time for a practical example. Theory is useful, but only by putting it into practice will you become more adept at it.

TIP: Try creating a text logo yourself. First, we’ll need a company briefing where we come up with a name, define various characteristics, and create a brand identity. This is a great way to get to know your fictional brand.

Bonus challenge: If you want to go one step further, you can also include a logo mark in the briefing. In the following steps, we are going to choose a typeface that suits the brand’s identity and characteristics. For an added challenge, include the logo mark at the start so the typeface has to match your logo mark as well. You can find great graphics at Iconfinder.

Company Briefing

Company Name: EcoWave

Characteristics:

  • Sustainable and eco-friendly products.
  • Innovative technologies focused on energy saving.
  • Wide range of ecological solutions.
  • Focus on quality and reliability.
  • Promotion of a green lifestyle.
  • Dedicated to addressing marine pollution.

Brand Identity: EcoWave is committed to a greener future. We provide sustainable and eco-friendly products that are essential for a better environment. Our advanced technologies and high-quality solutions enable customers to save energy and minimize their ecological footprint. EcoWave is more than just a brand; we represent a movement towards a more sustainable world with a special focus on combating marine pollution.

Keyword: Sustainability

Now that we’ve been briefed, we can start with the following steps:

  1. Identify key characteristics: Compile the top three defining characteristics of the company. You can add related words to each characteristic for more detail.
  2. Match the characteristics: Try to match these characteristics with the characteristics of the typeface category.
  3. Get inspired: Check the suggested links for inspiration and search for Sans-Serif fonts, for example. Look at popular fonts, but also search for fonts that fit what you want to convey about the brand (create a mood board).
  4. Make a preliminary choice: Use the gathered information to make an initial choice for the typeface. Adjust the weight and letter spacing until you are satisfied with the design of your logo.
  5. Evaluate your design: You now have the first version of your logo. Try it out on different backgrounds and photos that depict the desired look of the company. Assess whether it fits the intended identity and whether you are satisfied with the look. Not satisfied? Go back to your mood board and try a different typeface.

Let’s go over the steps for EcoWave:

1. Sustainable, Trustworthy, Innovative.

2. The briefing and brand focus primarily on innovation. When we match this aspect with the characteristics of typefaces, everything points to a Sans-Serif font, which offers a modern and innovative look.

3. Example Mood Board

4. Ultimately, I chose the IBM Plex Sans typeface. This modern, sans-serif typeface offers a fresh and contemporary look. It fits excellently with the innovative and sustainable characteristics of EcoWave. Below are the steps from the initial choice to the final result:

IBM Plex Sans Regular

IBM Plex Sans Bold

IBM Plex Sans Bold & Custom letter-spacing

IBM Plex Sans Bold & Custom edges

5. Here, you see the typeface in action. For me, this is a perfect match with the brand’s identity. The look feels just right.

Expert Insights and Trends in Typographic Logo Design

Those interested in typography might find ‘The Elements of Typographic Style’ by Robert Bringhurst insightful. In this section, I want to share a passage from the book about the importance of choosing a typeface that suits the specific task.

“Choose faces that suit the task as well as the subject. You are designing, let us say, a book about bicycle racing. You have found in the specimen books a typeface called Bicycle, which has spokes in the O, an A in the shape of a racing seat, a T that resembles a set of racing handlebars, and tiny cleated shoes perched on the long, one-sided serifs of ascenders and descenders, like pumping feet on the pedals. Surely this is the perfect face for your book?

Actually, typefaces and racing bikes are very much alike. Both are ideas as well as machines, and neither should be burdened with excess drag or baggage. Pictures of pumping feet will not make the type go faster, any more than smoke trails, pictures of rocket ships, or imitation lightning bolts tied to the frame will improve the speed of the bike.

The best type for a book about bicycle racing will be, first of all, an inherently good type. Second, it will be a good type for books, which means a good type for comfortable long-distance reading. Third, it will be a type sympathetic to the theme. It will probably be lean, strong, and swift; perhaps it will also be Italian. But it is unlikely to be carrying excess ornament or freight and unlikely to be indulging in a masquerade.”

— Robert Bringhurst

As Robert Bringhurst illustrates, a typeface should be appropriate not only for the subject but also for the specific task. What lessons can we draw from this for the typeface choice in our logo?

Functional and Aesthetic Considerations

The typeface must be legible in various sizes and on different mediums, from business cards to billboards. A well-designed logo should be easy to reproduce without loss of clarity.

Brand Identity

Suppose we have a brand in the bicycle industry, an innovative and modern company. In Robert Bringhurst’s example, we choose the typeface Bicycle, which, due to its name, seems to perfectly match bicycles. However, the typeface described by Robert is a serif font with many decorative elements, which does not align with the desired modern and innovative look of our brand. Therefore, this would be a mismatch.

Trends

“Styles come and go. Good design is a language, not a style.”

In this part, we discuss some new trends. However, it is also important to highlight the above quote. The basic principles we mention have been applicable for a long time and will continue to be. It can be both fun and challenging to follow the latest trends, but it is essential to integrate them with your basic principles.

Minimalism and Simplicity

Minimalism in Logo Design remains one of the major trends this year. The most characteristic aspect of this style is to limit the logo to the most essential elements. This creates a clear and timeless character. In typography, this is beneficial for readability and, at the same time, effectively communicating the brand identity in a timeless manner. We also see this well reflected in the rebranding of the fast-food chain Ashton.

Customization and Uniqueness

Another growing trend is customization in typography, where designers create personalized typefaces or modify existing typefaces to give the brand a unique look. This can range from subtle adjustments in letterforms to developing a completely custom typeface. Such an approach can contribute to a distinctive visual identity. A good example of this can be seen in the Apex logo, where the ‘A’ and ‘e’ are specifically adjusted.

Conclusion

We now know that choosing the right typeface for a logo goes beyond personal taste. It has a significant impact on how powerful and recognizable a brand becomes. Finding the perfect typeface is a challenge that requires both creativity and a practical approach, with a strong focus on three key aspects:

  • Font choice,
  • Font weight,
  • Letter spacing.

Finding the right typeface can be a quest, and personal preferences certainly play a role, but with the right tools, the process becomes much easier. The goal is to create a logo that is not only beautiful but also truly adds value by resonating with the people you want to reach and strengthening the brand’s key values.

We also looked at how trends can influence the longevity of your logo. It is important to be trendy, but it is equally important to remain true to timeless principles.

In summary,

Truly understanding both the technical details and the emotional impact of typefaces is enormously important when designing a logo. This knowledge helps you develop brands that not only look good but also have a deeper strategic impact: a strong brand.

And for those of you who are interested in diving deeper, I’ve tried to capture the fundamentals we’ve discussed in this article, focusing on good typeface choices, font weights, and letter spacing, in a tool called huisstijl. While it’s not perfect yet, I hope it can help some people create a simple brand identity that they love.

]]>
hello@smashingmagazine.com (Levi Honing)
<![CDATA[Regexes Got Good: The History And Future Of Regular Expressions In JavaScript]]> https://smashingmagazine.com/2024/08/history-future-regular-expressions-javascript/ https://smashingmagazine.com/2024/08/history-future-regular-expressions-javascript/ Tue, 20 Aug 2024 15:00:00 GMT Modern JavaScript regular expressions have come a long way compared to what you might be familiar with. Regexes can be an amazing tool for searching and replacing text, but they have a longstanding reputation (perhaps outdated, as I’ll show) for being difficult to write and understand.

This is especially true in JavaScript-land, where regexes languished for many years, underpowered compared to their more modern counterparts in PCRE, Perl, .NET, Java, Ruby, C++, and Python. Those days are over.

In this article, I’ll recount the history of improvements to JavaScript regexes (spoiler: ES2018 and ES2024 changed the game), show examples of modern regex features in action, introduce you to a lightweight JavaScript library that makes JavaScript stand alongside or surpass other modern regex flavors, and end with a preview of active proposals that will continue to improve regexes in future versions of JavaScript (with some of them already working in your browser today).

The History of Regular Expressions in JavaScript

ECMAScript 3, standardized in 1999, introduced Perl-inspired regular expressions to the JavaScript language. Although it got enough things right to make regexes pretty useful (and mostly compatible with other Perl-inspired flavors), there were some big omissions, even then. And while JavaScript waited 10 years for its next standardized version with ES5, other programming languages and regex implementations added useful new features that made their regexes more powerful and readable.

But that was then.

Did you know that nearly every new version of JavaScript has made at least minor improvements to regular expressions?

Let’s take a look at them.

Don’t worry if it’s hard to understand what some of the following features mean — we’ll look more closely at several of the key features afterward.

  • ES5 (2009) fixed unintuitive behavior by creating a new object every time regex literals are evaluated and allowed regex literals to use unescaped forward slashes within character classes (/[/]/).
  • ES6/ES2015 added two new regex flags: y (sticky), which made it easier to use regexes in parsers, and u (unicode), which added several significant Unicode-related improvements along with strict errors. It also added the RegExp.prototype.flags getter, support for subclassing RegExp, and the ability to copy a regex while changing its flags.
  • ES2018 was the edition that finally made JavaScript regexes pretty good. It added the s (dotAll) flag, lookbehind, named capture, and Unicode properties (via \p{...} and \P{...}, which require ES6’s flag u). All of these are extremely useful features, as we’ll see.
  • ES2020 added the string method matchAll, which we’ll also see more of shortly.
  • ES2022 added flag d (hasIndices), which provides start and end indices for matched substrings (see the short example just after this list).
  • And finally, ES2024 added flag v (unicodeSets) as an upgrade to ES6’s flag u. The v flag adds a set of multicharacter “properties of strings” to \p{...}, multicharacter elements within character classes via \p{...} and \q{...}, nested character classes, set subtraction [A--B] and intersection [A&&B], and different escaping rules within character classes. It also fixed case-insensitive matching for Unicode properties within negated sets [^...].
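
To make one of these concrete: ES2022’s flag d exposes the start and end indices of the overall match and of each capturing group (the sample string below is arbitrary):

const m = /(?<word>\w+)/d.exec('hello world');
console.log(m.indices[0]);          // → [0, 5] (start and end of the full match)
console.log(m.indices.groups.word); // → [0, 5] (the same span, via the named group)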

As for whether you can safely use these features in your code today, the answer is yes! The latest of these features, flag v, is supported in Node.js 20 and 2023-era browsers. The rest are supported in 2021-era browsers or earlier.

Each edition from ES2019 to ES2023 also added additional Unicode properties that can be used via \p{...} and \P{...}. And to be a completionist, ES2021 added string method replaceAll — although, when given a regex, the only difference from ES3’s replace is that it throws if not using flag g.
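
To illustrate that last quirk (the string here is arbitrary):

'a-b-c'.replaceAll('-', '_');  // → 'a_b_c'
'a-b-c'.replaceAll(/-/g, '_'); // → 'a_b_c'
'a-b-c'.replaceAll(/-/, '_');  // → TypeError (the regex must use flag g)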

Aside: What Makes a Regex Flavor Good?

With all of these changes, how do JavaScript regular expressions now stack up against other flavors? There are multiple ways to think about this, but here are a few key aspects:

  • Performance.
    This is an important aspect but probably not the main one since mature regex implementations are generally pretty fast. JavaScript is strong on regex performance (at least considering V8’s Irregexp engine, used by Node.js, Chromium-based browsers, and even Firefox; and JavaScriptCore, used by Safari), but it uses a backtracking engine that is missing any syntax for backtracking control — a major limitation that makes ReDoS vulnerability more common.
  • Support for advanced features that handle common or important use cases.
    Here, JavaScript stepped up its game with ES2018 and ES2024. JavaScript is now best in class for some features like lookbehind (with its infinite-length support) and Unicode properties (with multicharacter “properties of strings,” set subtraction and intersection, and script extensions). These features are either not supported or not as robust in many other flavors.
  • Ability to write readable and maintainable patterns.
    Here, native JavaScript has long been the worst of the major flavors since it lacks the x (“extended”) flag that allows insignificant whitespace and comments. Additionally, it lacks regex subroutines and subroutine definition groups (from PCRE and Perl), a powerful set of features that enable writing grammatical regexes that build up complex patterns via composition.

So, it’s a bit of a mixed bag.

JavaScript regexes have become exceptionally powerful, but they’re still missing key features that could make regexes safer, more readable, and more maintainable (all of which hold some people back from using this power).

The good news is that all of these holes can be filled by a JavaScript library, which we’ll see later in this article.

Using JavaScript’s Modern Regex Features

Let’s look at a few of the more useful modern regex features that you might be less familiar with. You should know in advance that this is a moderately advanced guide. If you’re relatively new to regex, you might want to work through an introductory tutorial first.

Named Capture

Often, you want to do more than just check whether a regex matches — you want to extract substrings from the match and do something with them in your code. Named capturing groups allow you to do this in a way that makes your regexes and code more readable and self-documenting.

The following example matches a record with two date fields and captures the values:

const record = 'Admitted: 2024-01-01\nReleased: 2024-01-03';
const re = /^Admitted: (?<admitted>\d{4}-\d{2}-\d{2})\nReleased: (?<released>\d{4}-\d{2}-\d{2})$/;
const match = record.match(re);
console.log(match.groups);
/* → {
  admitted: '2024-01-01',
  released: '2024-01-03'
} */

Don’t worry — although this regex might be challenging to understand, later, we’ll look at a way to make it much more readable. The key things here are that named capturing groups use the syntax (?<name>...), and their results are stored on the groups object of matches.

You can also use named backreferences to rematch whatever a named capturing group matched via \k<name>, and you can use the values within search and replace as follows:

// Change 'FirstName LastName' to 'LastName, FirstName'
const name = 'Shaquille Oatmeal';
name.replace(/(?<first>\w+) (?<last>\w+)/, '$<last>, $<first>');
// → 'Oatmeal, Shaquille'
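
And here’s \k<name> used within a pattern itself to rematch a named group, in this case to find a doubled word (a tiny illustrative example):

/\b(?<word>\w+) \k<word>\b/.test('now now');  // → true
/\b(?<word>\w+) \k<word>\b/.test('now then'); // → false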

For advanced regexers who want to use named backreferences within a replacement callback function, the groups object is provided as the last argument. Here’s a fancy example:

function fahrenheitToCelsius(str) {
  const re = /(?<degrees>-?\d+(\.\d+)?)F\b/g;
  return str.replace(re, (...args) => {
    const groups = args.at(-1);
    return Math.round((groups.degrees - 32) * 5/9) + 'C';
  });
}
fahrenheitToCelsius('98.6F');
// → '37C'
fahrenheitToCelsius('May 9 high is 40F and low is 21F');
// → 'May 9 high is 4C and low is -6C'

Lookbehind

Lookbehind (introduced in ES2018) is the complement to lookahead, which has always been supported by JavaScript regexes. Lookahead and lookbehind are assertions (similar to ^ for the start of a string or \b for word boundaries) that don’t consume any characters as part of the match. Lookbehinds succeed or fail based on whether their subpattern can be found immediately before the current match position.

For example, the following regex uses a lookbehind (?<=...) to match the word “cat” (only the word “cat”) if it’s preceded by “fat ”:

const re = /(?<=fat )cat/g;
'cat, fat cat, brat cat'.replace(re, 'pigeon');
// → 'cat, fat pigeon, brat cat'

You can also use negative lookbehind — written as (?<!...) — to invert the assertion. That would make the regex match any instance of “cat” that’s not preceded by “fat ”.

const re = /(?<!fat )cat/g;
'cat, fat cat, brat cat'.replace(re, 'pigeon');
// → 'pigeon, fat cat, brat pigeon'

JavaScript’s implementation of lookbehind is one of the very best (matched only by .NET). Whereas other regex flavors have inconsistent and complex rules for when and whether they allow variable-length patterns inside lookbehind, JavaScript allows you to look behind for any subpattern.
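
For instance, a variable-length lookbehind like the following works fine in JavaScript, while many other regex flavors reject or restrict it (the pattern is arbitrary):

/(?<=\d+px )solid/.test('10px solid'); // → true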

The matchAll Method

JavaScript’s String.prototype.matchAll was added in ES2020 and makes it easier to operate on regex matches in a loop when you need extended match details. Although other solutions were possible before, matchAll is often easier, and it avoids gotchas, such as the need to guard against infinite loops when looping over the results of regexes that might return zero-length matches.

Since matchAll returns an iterator (rather than an array), it’s easy to use it in a for...of loop.

const str = 'abcd'; // example input
const re = /(?<char1>\w)(?<char2>\w)/g;
for (const match of str.matchAll(re)) {
  const {char1, char2} = match.groups;
  // Print each complete match and matched subpatterns
  console.log(`Matched "${match[0]}" with "${char1}" and "${char2}"`);
}
// → Matched "ab" with "a" and "b"
// → Matched "cd" with "c" and "d"

Note: matchAll requires its regexes to use flag g (global). Also, as with other iterators, you can get all of its results as an array using Array.from or array spreading.

const matches = [...str.matchAll(/./g)];

Unicode Properties

Unicode properties (added in ES2018) give you powerful control over multilingual text, using the syntax \p{...} and its negated version \P{...}. There are hundreds of different properties you can match, which cover a wide variety of Unicode categories, scripts, script extensions, and binary properties.

Note: For more details, check out the documentation on MDN.

Unicode properties require using the flag u (unicode) or v (unicodeSets).
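
A few quick checks show what this buys you over ASCII-only character classes (the sample strings are arbitrary):

/^[a-z]+$/i.test('résumé');      // → false (ASCII classes miss accented letters)
/^\p{Letter}+$/u.test('résumé'); // → true
/^\p{Letter}+$/u.test('日本語');  // → true
/\p{Script=Greek}/u.test('π');   // → true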

Flag v

Flag v (unicodeSets) was added in ES2024 and is an upgrade to flag u — you can’t use both at the same time. It’s a best practice to always use one of these flags to avoid silently introducing bugs via the default Unicode-unaware mode. The decision on which to use is fairly straightforward. If you’re okay with only supporting environments with flag v (Node.js 20 and 2023-era browsers), then use flag v; otherwise, use flag u.

Flag v adds support for several new regex features, with the coolest probably being set subtraction and intersection. This allows using A--B (within character classes) to match strings in A but not in B or using A&&B to match strings in both A and B. For example:

// Matches all Greek symbols except the letter 'π'
/[\p{Script_Extensions=Greek}--π]/v

// Matches only Greek letters
/[\p{Script_Extensions=Greek}&&\p{Letter}]/v

For more details about flag v, including its other new features, check out this explainer from the Google Chrome team.

A Word on Matching Emoji

Emoji are 🤩🔥😎👌, but how emoji get encoded in text is complicated. If you’re trying to match them with a regex, it’s important to be aware that a single emoji can be composed of one or many individual Unicode code points. Many people (and libraries!) who roll their own emoji regexes miss this point (or implement it poorly) and end up with bugs.

The following details for the emoji “👩🏻‍🏫” (Woman Teacher: Light Skin Tone) show just how complicated emoji can be:

// Code unit length
'👩🏻‍🏫'.length;
// → 7
// Each astral code point (above \uFFFF) is divided into high and low surrogates

// Code point length
[...'👩🏻‍🏫'].length;
// → 4
// These four code points are: \u{1F469} \u{1F3FB} \u{200D} \u{1F3EB}
// \u{1F469} combined with \u{1F3FB} is '👩🏻'
// \u{200D} is a Zero-Width Joiner
// \u{1F3EB} is '🏫'

// Grapheme cluster length (user-perceived characters)
[...new Intl.Segmenter().segment('👩🏻‍🏫')].length;
// → 1

Fortunately, JavaScript added an easy way to match any individual, complete emoji via \p{RGI_Emoji}. Since this is a fancy “property of strings” that can match more than one code point at a time, it requires ES2024’s flag v.
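
Here’s what that looks like in practice, assuming an environment with flag v support:

const re = /^\p{RGI_Emoji}$/v;
re.test('👩🏻‍🏫');     // → true (one emoji, even though it's four code points)
re.test('👩🏻‍🏫👩🏻‍🏫'); // → false (two emoji)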

If you want to match emojis in environments without v support, check out the excellent libraries emoji-regex and emoji-regex-xs.

Making Your Regexes More Readable, Maintainable, and Resilient

Despite the improvements to regex features over the years, native JavaScript regexes of sufficient complexity can still be outrageously hard to read and maintain.

Regular Expressions are SO EASY!!!! pic.twitter.com/q4GSpbJRbZ

— Garabato Kid (@garabatokid) July 5, 2019


ES2018’s named capture was a great addition that made regexes more self-documenting, and ES6’s String.raw tag allows you to avoid escaping all your backslashes when using the RegExp constructor. But for the most part, that’s it in terms of readability.
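
For example, these two produce the same regex, with and without String.raw (the pattern itself is arbitrary):

const re1 = new RegExp('\\b\\d{4}\\b');        // every backslash doubled
const re2 = new RegExp(String.raw`\b\d{4}\b`); // written as-is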

However, there’s a lightweight and high-performance JavaScript library named regex (by yours truly) that makes regexes dramatically more readable. It does this by adding key missing features from Perl-Compatible Regular Expressions (PCRE) and outputting native JavaScript regexes. You can also use it as a Babel plugin, which means that regex calls are transpiled at build time, so you get a better developer experience without users paying any runtime cost.

PCRE is a popular C library used by PHP for its regex support, and it’s available in countless other programming languages and tools.

Let’s briefly look at some of the ways the regex library, which provides a template tag named regex, can help you write complex regexes that are actually understandable and maintainable by mortals. Note that all of the new syntax described below works identically in PCRE.

Insignificant Whitespace and Comments

By default, regex allows you to freely add whitespace and line comments (starting with #) to your regexes for readability.

import {regex} from 'regex';
const date = regex`
  # Match a date in YYYY-MM-DD format
  (?<year>  \d{4}) - # Year part
  (?<month> \d{2}) - # Month part
  (?<day>   \d{2})   # Day part
`;

This is equivalent to using PCRE’s xx flag.

Subroutines and Subroutine Definition Groups

Subroutines are written as \g<name> (where name refers to a named group), and they treat the referenced group as an independent subpattern that they try to match at the current position. This enables subpattern composition and reuse, which improves readability and maintainability.

For example, the following regex matches an IPv4 address such as “192.168.12.123”:

import {regex} from 'regex';
const ipv4 = regex`\b
  (?<byte> 25[0-5] | 2[0-4]\d | 1\d\d | [1-9]?\d)
  # Match the remaining 3 dot-separated bytes
  (\. \g<byte>){3}
\b`;

You can take this even further by defining subpatterns for use by reference only via subroutine definition groups. Here’s an example that improves the regex for admittance records that we saw earlier in this article:

const record = 'Admitted: 2024-01-01\nReleased: 2024-01-03';
const re = regex`
  ^ Admitted:\ (?<admitted> \g<date>) \n
    Released:\ (?<released> \g<date>) $

  (?(DEFINE)
    (?<date>  \g<year>-\g<month>-\g<day>)
    (?<year>  \d{4})
    (?<month> \d{2})
    (?<day>   \d{2})
  )
`;
const match = record.match(re);
console.log(match.groups);
/* → {
  admitted: '2024-01-01',
  released: '2024-01-03'
} */

A Modern Regex Baseline

regex includes the v flag by default, so you never forget to turn it on. And in environments without native v, it automatically switches to flag u while applying v’s escaping rules, so your regexes are forward and backward-compatible.

It also implicitly enables the emulated flags x (insignificant whitespace and comments) and n (“named capture only” mode) by default, so you don’t have to continually opt into their superior modes. And since it’s a raw string template tag, you don’t have to escape your backslashes \\\\ like with the RegExp constructor.

Atomic Groups and Possessive Quantifiers Can Prevent Catastrophic Backtracking

Atomic groups and possessive quantifiers are another powerful set of features added by the regex library. Although they’re primarily about performance and resilience against catastrophic backtracking (also known as ReDoS or “regular expression denial of service,” a serious issue where certain regexes can take forever when searching particular, not-quite-matching strings), they can also help with readability by allowing you to write simpler patterns.
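
As a minimal sketch of the syntax, which follows PCRE (see the library’s documentation for the exact semantics):

import {regex} from 'regex';

// Atomic group: once (?>...) has matched, the engine won't backtrack into it
const re1 = regex`^(?>\w+)\d$`;
re1.test('abc1'); // → false: \w+ consumed the '1' and won't give it back
// (a plain /^\w+\d$/ would backtrack and match 'abc1')

// Possessive quantifier: ++ is shorthand for an atomic, greedy +
const re2 = regex`^\w++\d$`;
re2.test('abc1'); // → false, for the same reason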

Note: You can learn more in the regex documentation.

What’s Next? Upcoming JavaScript Regex Improvements

There are a variety of active proposals for improving regexes in JavaScript. Below, we’ll look at the three that are well on their way to being included in future editions of the language.

Duplicate Named Capturing Groups

This is a Stage 3 (nearly finalized) proposal. Even better, it has recently become supported in all major browsers.

When named capturing was first introduced, it required that all (?<name>...) captures use unique names. However, there are cases when you have multiple alternate paths through a regex, and it would simplify your code to reuse the same group names in each alternative.

For example:

/(?<year>\d{4})-\d\d|\d\d-(?<year>\d{4})/

This proposal enables exactly this, preventing a “duplicate capture group name” error with this example. Note that names must still be unique within each alternative path.
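
In environments that support the proposal, whichever alternative matches fills the shared group:

const re = /(?<year>\d{4})-\d\d|\d\d-(?<year>\d{4})/;
'2025-08'.match(re).groups.year; // → '2025'
'08-2025'.match(re).groups.year; // → '2025'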

Pattern Modifiers (aka Flag Groups)

This is another Stage 3 proposal. It’s already supported in Chrome/Edge 125 and Opera 111, and it’s coming soon for Firefox. No word yet on Safari.

Pattern modifiers use (?ims:...), (?-ims:...), or (?im-s:...) to turn the flags i, m, and s on or off for only certain parts of a regex.

For example:

/hello-(?i:world)/
// Matches 'hello-WORLD' but not 'HELLO-WORLD'

Escape Regex Special Characters with RegExp.escape

This proposal recently reached Stage 3 and has been a long time coming. It isn’t yet supported in any major browsers. The proposal does what it says on the tin, providing the function RegExp.escape(str), which returns the string with all regex special characters escaped so you can match them literally.
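
Once it lands, usage would look something like this (a sketch based on the proposal; the input string is made up):

const price = '$5.99 (sale)';
const re = new RegExp(RegExp.escape(price));
re.test('Now: $5.99 (sale)!'); // → true: the $, ., (, and ) match literally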

If you need this functionality today, the most widely-used package (with more than 500 million monthly npm downloads) is escape-string-regexp, an ultra-lightweight, single-purpose utility that does minimal escaping. That’s great for most cases, but if you need assurance that your escaped string can safely be used at any arbitrary position within a regex, escape-string-regexp recommends the regex library that we’ve already looked at in this article. The regex library uses interpolation to escape embedded strings in a context-aware way.

Conclusion

So there you have it: the past, present, and future of JavaScript regular expressions.

If you want to journey even deeper into the lands of regex, check out Awesome Regex for a list of the best regex testers, tutorials, libraries, and other resources. And for a fun regex crossword puzzle, try your hand at regexle.

May your parsing be prosperous and your regexes be readable.

]]>
hello@smashingmagazine.com (Steven Levithan)
<![CDATA[Pricing Projects As A Freelancer Or Agency Owner]]> https://smashingmagazine.com/2024/08/pricing-projects-freelancer-agency-owner/ https://smashingmagazine.com/2024/08/pricing-projects-freelancer-agency-owner/ Fri, 16 Aug 2024 13:00:00 GMT Pricing projects can be one of the most challenging aspects of running a digital agency or working as a freelance web designer. It’s a topic that comes up frequently in discussions with fellow professionals in my Agency Academy.

Three Approaches to Pricing

Over my years in the industry, I’ve found that there are essentially three main approaches to pricing:

  • Fixed price,
  • Time and materials,
  • And value-based pricing.

Each has its merits and drawbacks, and understanding these can help you make better decisions for your business. Let’s explore each of these in detail and then dive into what I believe is the most effective strategy.

Fixed Price

Fixed pricing is often favored by clients because it reduces their risk and allows for easier comparison between competing proposals. On the surface, it seems straightforward: you quote a price, the client agrees, and you deliver the project for that amount. However, this approach comes with significant drawbacks for agencies and freelancers:

  • Estimating accurately is incredibly challenging.
    In the early stages of a project, we often don’t have enough information to provide a precise quote. Clients may not have a clear idea of their requirements, or there might be technical complexities that only become apparent once work begins. This lack of clarity can lead to underquoting, which eats into your profits, or overquoting, which might cost you the job.
  • There’s no room for adaptation based on testing or insights gained during the project.
    Web design and development is an iterative process. As we build and test, we often discover better ways to implement features or uncover user needs that weren’t initially apparent. With a fixed price model, these improvements are often seen as “scope creep” and can lead to difficult conversations with clients about additional costs.
  • The focus shifts from delivering the best possible product to sticking within the agreed-upon scope.
    This can result in missed opportunities for innovation and improvement, ultimately leading to a less satisfactory end product for the client.

While fixed pricing might seem straightforward, it’s not without its complications. The rigidity of this model can stifle creativity and adaptability, two crucial elements in successful web projects. So, let’s look at an alternative approach that offers more flexibility.

Time and Materials

Time and materials (T&M) pricing offers a fairer system where the client only pays for the hours actually worked. This approach has several advantages:

  • Allows for greater adaptability as the project progresses. If new requirements emerge or if certain tasks take longer than expected, you can simply bill for the additional time. This flexibility can lead to better outcomes as you’re not constrained by an initial estimate.
  • Encourages transparency and open communication. Clients can see exactly what they’re paying for, which can foster trust and understanding of the work involved.
  • Reduces the risk of underquoting. You don’t have to worry about eating into your profits if a task takes longer than expected.

However, T&M pricing isn’t without its drawbacks:

  • It carries a higher perceived risk for the client, as the final cost isn’t determined upfront. This can make budgeting difficult for clients and may cause anxiety about runaway costs.
  • It requires careful tracking and regular communication about hours spent. Without this, clients may be surprised by the final bill, leading to disputes.
  • Some clients may feel it incentivizes inefficiency, as taking longer on tasks results in higher bills.

T&M pricing can work well in many scenarios, especially for long-term or complex projects where requirements may evolve. However, it’s not always the perfect solution, particularly for clients with strict budgets or those who prefer more certainty. There’s one more pricing model that’s often discussed in the industry, which attempts to tie pricing directly to results.

Value-Based Pricing

Value-based pricing is often touted as the holy grail of pricing strategies. The idea is to base your price on the value your work will generate for the client rather than on the time it takes or a fixed estimate. While this sounds great in theory, it’s rarely a realistic approach in our industry. Here’s why:

  • It’s only suitable for projects where you can tie your efforts directly to ROI (Return on Investment). For example, if you’re redesigning an e-commerce site, you might be able to link your work to increased sales. However, for many web projects, the value is more intangible or indirect.
  • Accurately calculating ROI is often difficult or impossible in web design and development. Many factors contribute to a website’s success, and isolating the impact of design or development work can be challenging.
  • It requires a deep understanding of the client’s business and industry. Without this, it’s hard to accurately assess the potential value of your work.
  • Clients may be reluctant to share the financial information necessary to make value-based pricing work. They might see it as sensitive data or simply may not have accurate projections.
  • It can lead to difficult conversations if the projected value isn’t realized. Was it due to your work or other factors beyond your control?

While these three approaches form the foundation of most pricing strategies, the reality of pricing projects is often more nuanced and complex. In fact, as I point out in my article “How To Work Out What To Charge Clients: The Honest Version”, pricing often involves a mix of educated guesswork, personal interest in the project, and an assessment of what the market will bear.

Given the challenges with each of these pricing models, you might be wondering if there’s a better way. In fact, there is, and it starts with a different approach to the initial client conversation.

Start by Discussing Appetite

Instead of jumping straight into deliverables or hourly rates, I’ve found it more effective to start by discussing what 37signals calls “appetite” in their book Shaping Up. Appetite is how much the product owner is willing to invest based on the expected return for their business. This concept shifts the conversation from “What will this cost?” to “What is this worth to you?”

This approach is beneficial for several reasons:

  • Focuses on the budget rather than trying to nail down every deliverable upfront. This allows for more flexibility in how that budget is allocated as the project progresses.
  • Allows you to tailor your proposal to what the client can actually afford. There’s no point in proposing a $100,000 solution if the client only has $20,000 to spend.
  • Helps set realistic expectations from the start. If a client’s appetite doesn’t align with what’s required to meet their goals, you can have that conversation early before investing time in detailed proposals.
  • Shifts the conversation from price comparison to value delivery. Instead of competing solely on price, you’re discussing how to maximize the value of the client’s investment.
  • Mirrors how real estate agents work — they ask for your budget to determine what kind of properties to show you. This analogy can help clients understand why discussing budgets early is crucial.

To introduce this concept to clients, I often use the real estate analogy. I explain that even if you describe your ideal house (e.g., 3 bedrooms, specific location), a real estate agent still cannot give you a price because it depends on many other factors, including the state of repair and nearby facilities that may impact value. Similarly, in web design and development, many factors beyond the basic requirements affect the final cost and value of a project.

Once you’ve established the client’s appetite, you’re in a much better position to structure your pricing. But how exactly should you do that? Let me share a strategy that’s worked well for me and many others in my Agency Academy.

Improve Your Estimates With Sub-Projects

Here’s an approach I’ve found highly effective:

  1. Take approximately 10% of the total budget for a discovery phase. This can be a separate contract with a fixed price. During this phase, you dig deep into the client’s needs, goals, and constraints. You might conduct user research, analyze competitors, and start mapping out the project’s architecture.
  2. Use the discovery phase to define what needs to be prototyped, allowing you to produce a fixed price for the prototyping sub-project. This phase might involve creating wireframes, mockups, or even a basic working prototype of key features.
  3. Test and evolve the prototype, using it as a functional specification for the build. This detailed specification allows you to quote the build accurately. By this point, you have a much clearer picture of what needs to be built, reducing the risk of unexpected complications.

This approach combines elements of fixed pricing (for each sub-project) with the flexibility to adapt between phases. It allows you to provide more accurate estimates while still maintaining the ability to pivot based on what you learn along the way.

Advantages of the Sub-Project Approach

This method offers several key benefits:

  • Clients appreciate the sense of control over the budget. They can decide after each phase whether to continue, giving them clear exit points if needed.
  • It reduces the perceived risk for clients, as they could theoretically change suppliers between sub-projects. This makes you a less risky option compared to agencies asking for a commitment to the entire project upfront.
  • Each sub-project is easier to price accurately. As you progress, you gain more information, allowing for increasingly precise estimates.
  • It allows for adaptability between sub-projects, eliminating the problem of scope creep. If new requirements emerge during one phase, they can be incorporated into the planning and pricing of the next phase.
  • It encourages ongoing communication and collaboration with the client. Regular check-ins and approvals are built into the process.
  • It aligns with agile methodologies, allowing for iterative development and continuous improvement.

This sub-project approach not only helps with more accurate pricing but also addresses one of the most common challenges in project management: scope creep. By breaking the project into phases, you create natural points for reassessment and adjustment. For a more detailed look at how this approach can help manage scope creep, check out my article “How To Price Projects And Manage Scope Creep.”

This approach sounds great in theory, but you might be wondering how clients typically react to it. Let’s address some common objections and how to handle them.

Dealing with Client Objections

You may encounter resistance to this approach, especially in formal bid processes where clients are used to receiving comprehensive fixed-price quotes. Here’s how to handle common objections:

“We need a fixed price for the entire project.”

Provide an overall estimate based on their initial scope, but emphasize that this is a rough figure. Use your sub-project process as a selling point, explaining how it actually provides more accurate pricing and better results. Highlight how inaccurate other agency quotes are likely to be and warn about potential scope discussions later.

“This seems more complicated than other proposals we've received.”

Acknowledge that it may seem more complex initially, but explain how this approach actually simplifies the process in the long run. Emphasize that it reduces risk and increases the likelihood of a successful outcome.

“We don't have time for all these phases.”

Explain that while it may seem like more steps, this approach often leads to faster overall delivery because it reduces rework and ensures everyone is aligned at each stage.

“How do we compare your proposal to others if you’re not giving us a fixed price?”

Emphasize that the quality and implementation of what agencies quote for can vary wildly. Your approach ensures they get exactly what they need, not just what they think they want at the outset. Encourage them to consider the long-term value and reduced risk, not just the initial price tag.

“We’re not comfortable discussing our budget upfront.”

Use the real estate analogy to explain why discussing the budget upfront is crucial. Just as a real estate agent needs to know your budget to show you appropriate properties, you need to understand their investment appetite to propose suitable solutions.

By adopting this approach to pricing, you can create a more collaborative relationship with your clients, reduce the risk for both parties, and ultimately deliver better results.

Remember,

Pricing isn’t just about numbers — it’s about setting the foundation for a successful project and a positive client relationship.

By being transparent about your process and focusing on delivering value within the client’s budget, you’ll set yourself apart in a crowded market.

]]>
hello@smashingmagazine.com (Paul Boag)
<![CDATA[How To Defend Your Design Process]]> https://smashingmagazine.com/2024/08/how-defend-your-design-process/ https://smashingmagazine.com/2024/08/how-defend-your-design-process/ Thu, 15 Aug 2024 10:00:00 GMT Maybe you’ve been there before: You’re in the middle of the design process, and stakeholders expect you to deliver faster. How do you best manage a situation like this? How do you communicate the design process and defend it when stakeholders think the design is taking too long?

Let’s take a closer look at what you can do to clarify false expectations and prevent them in the first place.

This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns 🍣 — with live UX training coming up this year. Free preview.

Why Stakeholders Ask For Quicker Turnaround

🤔 Polished deliverables hide the process.

When you show someone a polished, final design, they probably won’t see the complexity of the work behind it unless they are a designer themself, of course. That’s the knowledge gap that lies between designers and stakeholders and one of the reasons why stakeholders might make false assumptions about how long the design will take.

🤔 Polished deliverables suggest a fast production time.

Not familiar with the design process, stakeholders often track value in UX deliverables in an attempt to “measure design” and the progress made. And that can lead to a dilemma: When more final, polished deliverables arrive quickly, stakeholders also assume a faster production time. The real value of design, however, lies in the quality of the process behind it.

How To Get Support From Stakeholders

Design is all about well-orchestrated feedback loops. For different audiences, from customers and designers to developers and stakeholders. Cutting corners breaks these feedback loops. The result is poor inputs that lead to poor outcomes — often reversible but sometimes damaging for years to come.

Protecting the design process isn’t only in the interest of the designers but, most importantly, in the interest of the user and the business. So, how can we advocate for it?

✅ Highlight user value.

One mistake to avoid is to present deliverables as “finished.” Emphasize that you are still testing and highlight that the design process is a way to maximize user value and that business value comes from user value, not the other way around. No productivity optimization can automate user value, and there is no “later” phase to patch broken design work.

✅ Ask for uninterrupted time.

To get the time you and your team need to design, it might be an option to suggest uninterrupted times for heads-down design work. You could also suggest shifting priorities or reducing the overall scope.

✅ Be sincere about the process.

Also, remember to calibrate expectations: You don’t know how your stakeholders work, so you shouldn’t expect that they know and understand the design process. The more sincere you are about what’s needed to be ready, the more likely you are to get understanding and support, rather than fast turnaround requests.

✅ Visualize progress.

As designers, we often get defensive, not showing the work until we feel that it’s in good shape. Personally, I found it remarkably helpful to show the design progress to stakeholders early and repeatedly instead. The point is not to ask for a personal opinion on the design, but whether they think it actually helps deliver user value.

A great technique to visualize the complexity of UX work is event storming — we’ll take a closer look at how it works in a second. To keep stakeholders on top of things, it might also be a good idea to report progress proactively. So why not opt for a short, two-minute video update once a week?

Exercise: Event Storming

The most impactful way to be transparent about the process and explain why design takes time is to visualize it. Not as abstract Double-Diamond or Triple-Diamond diagrams, but as messy, real-world sticky notes on a huge Miro or FigJam board — with all the pieces of work needed to get to final deliverables.

How To Run An Event Storming Session

Basically, we bring everyone involved in the project on board for a 2-hour session. On a timeline, we place orange sticky notes representing the events required for the success of the project. Then, we cluster these events and break them across lanes, with everything from user testing and stakeholder approval to research tasks and design initiatives.

The resulting timeline visualizes the process and acts as a reference for the work to be done, or the work completed. Sometimes, we add multiple lanes to map the work across different UX activities, e.g., UX research, UX metrics, and so on. Your timeline might also include any other teams and domains that are relevant to the work — think technical details, risks, stakeholder engagement, user testing, and others.

The Value Of Event Storming

To me, event storming creates a much more honest and real visualization of the design process compared to any diamond diagrams that we often use. It’s messy, it’s complex, it’s difficult, and it matches the complexity of real life. Plus, it’s customized to the needs of a specific project, with people who must be involved for successful delivery.

I can’t emphasize enough just how incredibly impactful this little exercise can be to create a shared understanding about what we are doing, how we are doing it, and what is required from all teams for a successful delivery. I hope it will help you defend your process the next time stakeholders ask for a quicker turnaround. 🙌🏽

Further Reading

Meet Smart Interface Design Patterns

You can find more details on design patterns and UX strategy in Smart Interface Design Patterns, our 10h-video course with 100s of practical examples from real-life projects — with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview. Use code BIRDIE to save 15% off.

Meet Smart Interface Design Patterns, our video course on interface design & UX.

100 design patterns & real-life examples.
10h-video course + live UX training. Free preview.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[If I Was Starting My Career Today: Thoughts After 15 Years Spent In UX Design (Part 2)]]> https://smashingmagazine.com/2024/08/thoughts-after-15-years-spent-ux-design-part2/ https://smashingmagazine.com/2024/08/thoughts-after-15-years-spent-ux-design-part2/ Fri, 09 Aug 2024 11:00:00 GMT In the previous article in my two-part series, I have explained how important it is to start by mastering your design tools, to work on your portfolio (even if you have very little work experience — which is to be expected at this stage), and to carefully prepare for your first design interviews.

If all goes according to plan, and with a little bit of luck, you’ll land your first junior UX job — and then, of course, you’ll be facing more challenges, about which I am about to speak in this second article in my two-part article series.

In Your New Junior UX Job: On the Way to Grow

You have probably heard of the Pareto Rule, which states that 20% of actions provide 80% of the results.

“The Pareto Principle is a concept that specifies that 80% of consequences come from 20% of the causes, asserting an unequal relationship between inputs and outputs. The principle was named after the economist Vilfredo Pareto.”

— “The Pareto Principle, a.k.a. the Pareto Rule

This means that some of your actions will help you grow much faster than others.

But before we go into the details, let’s briefly consider the junior UX designer path. I think it’s clear that, at first, juniors usually assist other designers with simple but time-consuming tasks. Then, the level of complexity and your responsibilities start increasing, depending on your performance.

So, you got your first design job? Great! Here are a few things you can focus on if you want to be growing at a faster pace.

Chase For Challenges

The simple but slow way to go is to do your work and then wait until your superiors notice how good you are and start giving you more complex tasks. The problem is that people focus on themselves too much.

So, to “cut some corners,” you need to actively look for challenges. It’s scary, I know, but remember: the people who invented the groundbreaking UX approaches and frameworks you now see in books and manuals used their intuition first. And you have the whole World Wide Web full of articles and lectures to draw on. So, define the skill you want to develop, spend a day reading about the topic, find a real problem, and practice. Then share what you did and get some feedback. After a few iterations, I bet you will be assigned your first real task!

Use Interfaces Consciously

Take the time to look again at the screenshot of the Amazon website from Part One.

User interfaces didn’t appear in their present form right from the start. Instead, they evolved to their current state over the span of many years. And you all were part of their evolution, albeit passively — you registered on different websites, reset your passwords quite a few times, clicked onboarding screens, filled out short and long web forms, used search, and so on.

In your design work, all tasks (or 99% of them, at least at the beginning) will be based on those UX patterns. You don’t need to reinvent the wheel; you only need to remember what you already know and pay attention to the details while using the interfaces of the apps on your smartphone and on your computer. Ask yourself:

  • Why was this designed this way?
  • What is not clear enough for me as a user?
  • What is thought out well and what is not?

All of today’s great design solutions were built based on common sense and then documented so that other people can learn how to re-use this knowledge. Develop your own “common sense” skill every day by being a careful observer and by living your life consciously. Notice the patterns of good design, try to understand and memorize them, and then implement and rethink them in your own work.

I can also highly recommend the Smart Interface Design Patterns course with Vitaly Friedman. It provides guidelines and best practices for common components in modern interfaces. Inventing a new solution for every problem takes time, and too often, it’s just unnecessary. Instead, we can rely on bulletproof design patterns to avoid issues down the line. This course helps with just that. In the course, you will study hundreds of hand-picked examples, from complex navigation to filters, tables, and forms, and you will work on actual real-life challenges.

Learn How to Present Your Work

The ability to convey complex thoughts and ideas in the form of clear sentences defines how effectively you will be able to interact with other people.

This is a core work skill — one that you’ll actually be using your whole life, and not only in your work. I have written about this topic in much detail previously:

“Good communication is about sharing your ideas as clearly as possible.”

— “Effective Communication For Everyday Meetings” (Smashing Magazine)

In my article, I have described all the general principles that apply to effective communication, with the most important being: to develop a skill, you need to practice.

As a quick exercise, try telling your friends about the work you do without being boring when explaining the details. You will feel that you are on the right track if they don’t try to change the topic and instead ask you additional questions!

Gather Feedback

Don’t wait for your yearly review to hear about what you were doing right and wrong. Ask people for feedback and suggestions, and ask them often.

To help them start, first tell them about your weaknesses and ask for their own impressions. Try encouraging them to expand on their input and ask for recommendations on how you could fix those weaknesses. Don’t forget to tell them when you are trying to apply their suggestions in practice. After all, these people helped you become better, so be thankful.

Learn Business

I see a lot of designers trying to apply all of their experience to every project, and they often complain that it doesn’t work — customers refuse to follow the entire classical UX process, such as defining User Personas, creating the Information Architecture (IA), outlining the customer journey map, and so on. Sometimes it happens because clients don’t have the time and budget for it, or they don’t see the value because the designer can’t explain it properly.

But remember that many great products were built without using all of today’s available and tested UX approaches; this doesn’t mean those approaches are useless. Initially, there was only common sense and many attempts to get better results, and only later did someone describe something as a working approach and specify all the details. So, before trying to apply any of these UX techniques, think about what you need to achieve. Is there any other way to get there within your time and budget?

Learn how the business works. Talk to customers in business language and communicate the value you create and not the specific approach, framework, or tool that you’ll be using.

“Good UX design is where value comes into the picture. We add value when we transform a product or service from delivering a poor experience to providing a good experience.”

— “The Value of Great UX,” by Jared Spool

Learn How to Make Interfaces Nice-Looking

Yes, user experience should come first, but let’s be honest — we also love nice things! The same goes for your customers; they can’t always see the UX part of your work, but they can always tell whether the interface is good-looking. So, learn composition and color theory, use elegant illustrations and icons, study typography, and always strive to make your work visually appealing. Some would say that it’s not so important, but trust me, it is.

As an exercise, try to copy the design of a few beautiful interfaces. Take a look at an interface screen, then close it and try to recreate it from memory. When you are done, compare the two and make a few more adjustments to get as close to the original as possible. Try to understand why the original was built the way it is. I bet this process of reproducing an interface will help you notice many things you hadn’t been noticing before.

Save People’s Time and Effort

Prepare for new tasks in advance. Create a list of questions, and don’t forget to ask about the deadlines. Align your plan and the number of iterations so people know precisely what to expect from you and when. Be curious (but not annoying) by asking or sending questions every few hours, but try to search for the answers online first. Even if you don’t find the exact answer, it’ll help you formulate the right questions and get a better view of the “big picture.” Remember, one day you will get a task directly from the customer, so fetching the data you need to complete tasks correctly is an excellent skill to develop.

Structure Your Knowledge and Create a Learning Plan

When you are just beginning to learn, too many articles about UX design will look like absolute “must-reads” to you. But you will drown in information if you try to read them all in no particular order. Instead of trying to read everything, first find a mentor who will help you build a learning plan and advise you along the way.

Another good way to start is to complete a solid online UX course. If you can’t, take the learning program of any popular UX course out there and research the topics from the course’s list one by one. You can also use such a structured list (going from easier to more complex UX topics) to filter the articles you are going to read.

There are many excellent courses out there, and here are a few suggestions:

  • “Selection of free UX design courses, including those offering certifications,” by Cheshta Dua
    In this article, the author shares a few free UX design courses that helped her get started as a UX designer.
  • “Best free UX design courses — 2024,” by Cynthia Vinney (UX Design Institute)
    This is a comparison of a few free UX design courses, both online and in-person.
  • “The 10 Best Free UX Design Courses in 2024,” by Rachel Meltzer (CareerFoundry)
    A selection of free UX design courses — using these you can learn the fundamentals of UX design, the tools designers use, and more about the UX design career path.
  • “The HTML/CSS Basics” (.dev), by Geoff Graham
    The Basics is an excellent online course that teaches the basic principles of front-end development. It’s a good “entry point” for those just coming into front-end development or perhaps for someone with experience writing code from years ago who wants to jump into modern-day development.

Practice, Practice, Practice

Bruce Lee once said:

“I fear not the man who has practiced 10,000 kicks once, but the man who has practiced one kick 10,000 times.”

— Bruce Lee

You may have read a lot about some new revolutionary UX approaches, but only practicing allows you to convert this knowledge into a skill. Our brain continually works to clear out unnecessary information from our memory. Therefore, actively practicing the ideas and knowledge that you have learned is the only way to signal to your brain that this knowledge is essential to be retained and re-used.

On a related note, you will likely also remember the popular “10,000-hour rule,” which was popularized by Malcolm Gladwell’s bestselling book Outliers.

As Gladwell tells it, the rule goes like this: it takes 10,000 hours of intensive practice to achieve mastery of complex skills and materials, like playing the violin or getting as good as Bill Gates at computer programming. It turns out practice is important, and it’s surprising how much time and effort it may take to master something complicated. But later research also suggests that someone could practice for thousands of hours and still not be a master performer. They could be outperformed by someone who practiced less but had a teacher who showed them just what to focus on at a key moment in their practice.

So, remember my advice from the previous section? Try to find a mentor because, as I said earlier, learning and practicing with a mentor and a good plan will often lead to better results.

Conclusion

Instead of a conclusion (or trying to give you the answer to the ultimate question of life, the universe, and everything), here are just a few final words of advice.

Remember, there is no single correct way to do things because there are no absolute criteria for “things done properly.” You can apply all your knowledge and follow every step of the classical design process, and the product may still fail.

At the same time, someone could quickly develop a minimum viable product (MVP) without using all of the standard design phases — and still conquer the market. Don’t believe me?

The first Apple iPhone, introduced 17 years ago, didn’t even have a basic copy/paste feature, yet we all know how the iPhone conquered the world (and it’s not only the iPhone; there are many other successful MVP examples out there, often conceived by small startups). Because Apple engineers and designers got the core product design concept right, they could release a product that didn’t yet have everything in it.

So yes, you need to read a lot about UX and UI design, watch tutorials, learn the design theory, try different approaches, speak to the people using your product (or the first alpha or beta version of it), and practice. But in the end, always ask yourself, “Is this the most efficient way to bring value to people and get the needed results?” If the answer is “No,” update your design plan. Things don’t happen by themselves; we humans make them happen.

You are the pilot of your plane, so don’t expect someone else to care about your success more than you. Do your best. Make corrections and iterate. Learn, learn, learn. And sooner or later, you’ll reach success!

Further Reading

A Selection Of Design Resources (Part One, Part Two)

  • Photoshop CS Down & Dirty Tricks, a book by Scott Kelby
    Bestselling author Scott Kelby shares an amazing collection of Photoshop tricks, including how to create the same exact effects you see every day in magazines, at the movies, on the Web, and more. These are real-world techniques, the same ones you see used by leading Photoshop photographers, designers, and special effect masters.
  • “Why Designers Aren’t Understood,” by Vitaly Friedman (Smashing Magazine)
    How do we conduct UX research when there is no or only limited access to users? Here are some workarounds to run UX research or make a strong case for it. (This article is an upcoming part of the “Smart Interface Design Patterns.” — Editor’s Note)
  • “UXchallenge,” by Yachin You
    This website will help you learn how to solve real problems that customers face, and it presents case studies that are related to these problems.
  • “Kano analysis: The Kano model explained” (Qualtrics)
    Kano analysis (also known as the “Customer Delight vs. Implementation Investment” approach) is a tool that helps you enhance your products and services based on customer emotions. This guide will help you understand what Kano analysis is and how you can use it in practice.
  • “Kano Model: What It Is & How to Use It to Increase Customer Satisfaction” (Userpilot)
    The Kano model uses quick and powerful data analysis to design your product roadmap. In this article, you will learn a brief history of the Kano model, a practical explanation of how it works, five categories of potential customer reactions to new features, and a four-step process for effective Kano analysis.
  • “The Pareto Principle” (Investopedia)
    The Pareto Principle is a concept that specifies that 80% of consequences come from 20% of the causes, asserting an unequal relationship between inputs and outputs. Named after the economist Vilfredo Pareto, this principle serves as a general reminder that the relationship between inputs and outputs is not balanced. The Pareto Principle is also known as the Pareto Rule or the 80/20 Rule.
  • “Figma Portfolio Templates & Examples” (UX Crush)
    A curated selection of portfolio templates for Figma Design.
  • “How to Define a User Persona,” by Raven Veal (CareerFoundry)
    As you break into a career in UX, user personas are one tool you’ll certainly want to have available as you gather user research and find design solutions to solve problems and create more human-friendly products and experiences.
  • “How to design a customer journey map,” by Emily Stevens (UX Design Institute)
    A customer journey map is a visual representation of how a user interacts with your product. This detailed guide will teach you how to create such a customer journey map.
  • “Building Components For Consumption, Not Complexity” (Part 1, Part 2), by Luis Ouriach (Smashing Magazine)
    In this two-part series of articles, Luis shares his experience with design systems and how you can overcome the potential pitfalls, starting from how to make designers on your team adopt the complex and well-built system that you created to what the best naming conventions are and how to handle the auto-layout of components, indexing/search, and more.
  • “Effective Communication For Everyday Meetings,” by Andrii Zhdan (Smashing Magazine)
    Before any meeting starts, we often have many ideas about what to say and how it should go. But when the meeting happens, reality may “crash” all of our plans. This article is about conducting productive meetings. The author will give you a step-by-step guide on preparing a solid meeting structure that will let you follow the original plan and reach the meeting goals.
  • “The Value of Great UX,” by Jared Spool
    This crossover from poor UX design to good UX design is where value comes into the picture. We add value when we transform a product or service from delivering a poor experience to providing a good experience.
  • “How Designers Should Ask For (And Receive) High-Quality Feedback,” by Andy Budd (Smashing Magazine)
    Designers often complain about the quality of feedback they get from senior stakeholders. In this article, Andy Budd shares a better way of requesting feedback: rather than sharing a linear case study that explains every design revision, the first thing to do would be to better frame the problem.
  • “Designing A Better Design Handoff File In Figma,” by Ben Shih (Smashing Magazine)
    Practical tips to enhance the handoff process between design and development in product development, with guidelines for effective communication, documentation, design details, version control, and plugin usage.
  • “The HTML/CSS Basics” (.dev), by Geoff Graham
    The Basics is an online course that teaches the basic principles of front-end development, focusing specifically on HTML and CSS. A good “entry point” for those just coming into front-end development and perhaps for someone with experience writing code years ago who wants to jump into modern-day development.
  • “Selection of free UX design courses, including those offering certifications,” by Cheshta Dua
    In this article, the author shares a few free UX design courses that helped her get started as a UX designer.
  • “Best free UX design courses — 2024,” by Cynthia Vinney (UX Design Institute)
    Check this comparison of several free UX design courses currently on the market, both online and in-person.
  • “The 10 Best Free UX Design Courses in 2024,” by Rachel Meltzer (CareerFoundry)
    A selection of free UX design courses where you can learn the fundamentals of UX design, the tools designers use, and the UX design career path. This guide provides a range of courses, from micro-tutorials to full-featured UI/UX courses.
  • “Researcher Behind ‘10,000-Hour Rule’ Says Good Teaching Matters, Not Just Practice,” by Jeffrey Young (EdSurge Magazine)
    It takes 10,000 hours of intensive practice to achieve mastery of complex skills and materials, like playing the violin or getting as good as Bill Gates at computer programming. Turns out, a study also shows that there’s another important variable that Gladwell originally didn’t focus on: how good a student’s teacher is.
  • “An Apple engineer details why the first iPhone didn’t have copy and paste,” by Filipe Espósito (9to5Mac)
    Apple introduced the first iPhone 17 years ago, and a lot has changed since then, but it’s hard to believe that long ago, the iPhone didn’t even have copy-and-paste options. Now, former Apple software engineer Ken Kocienda has revealed details about why the first iPhone didn’t have such features.
  • “Fifteen examples of successful MVPs,” by Ross Krawczyk (RST Software)
    Startups need to get their products to the market faster than ever in an increasingly competitive world. The minimum viable product is the way to achieve this, but you must be able to provide the right key features that give value to a wide customer base in order to attract clients and investors in time.
]]>
hello@smashingmagazine.com (Andrii Zhdan)
<![CDATA[Best Of Pro Scheduler Libraries]]> https://smashingmagazine.com/2024/08/best-pro-scheduler-libraries/ https://smashingmagazine.com/2024/08/best-pro-scheduler-libraries/ Thu, 08 Aug 2024 20:00:00 GMT This article is sponsored by Bryntum

Team, assemble! Need to coordinate a crew of superheroes across the globe to save the world? A scheduler can help. Same for web applications that serve any group of collaborating users — whether that includes wearing a cape or not.

Event calendars and schedulers may often seem the same; sometimes, a calendar is all you need. However, a scheduler does a lot more than block out a day or time slot, especially for the “supervisors” and “managers” (the Nick Furys of their teams), who don’t just hand out assignments and due dates but also need to track the usage of available resources — like staff work hours, or inanimate resources, like an operating room in a hospital — to ensure the efficiency of each task, resource, and process.

In this post, you’ll find some of the best commercial web scheduler libraries (JavaScript based) with amazing UX and high efficiency that are currently available. Let’s dive in right away, shall we?

Bryntum Scheduler

This scheduler is super easy to integrate and works seamlessly with any frontend stack, such as Angular, React, Vue, or vanilla JS. The Bryntum website has examples in all the supported frameworks and numerous sample use cases to assist your integration process.
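To give you a feel for the setup, here is a rough sketch of a minimal vanilla JS integration, modeled on Bryntum’s documented configuration style; the container id, dates, and data below are made-up placeholders, and the exact option names should be verified against the version you install:

```js
import { Scheduler } from '@bryntum/scheduler';

// A minimal scheduler: one resource column, a visible time span,
// and a couple of events assigned to resources.
const scheduler = new Scheduler({
    appendTo  : 'app', // id of an existing container element (placeholder)
    startDate : new Date(2024, 7, 8, 8),
    endDate   : new Date(2024, 7, 8, 18),

    columns : [
        { text : 'Name', field : 'name', width : 160 }
    ],

    resources : [
        { id : 1, name : 'Nick Fury' },
        { id : 2, name : 'Maria Hill' }
    ],

    events : [
        { id : 1, resourceId : 1, name : 'Assemble team', startDate : '2024-08-08T09:00', duration : 2, durationUnit : 'hour' },
        { id : 2, resourceId : 2, name : 'Brief recruits', startDate : '2024-08-08T13:00', duration : 3, durationUnit : 'hour' }
    ]
});
```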

As for its appearance, if none of the modern themes provided with the scheduler will do, you can quickly achieve a tailored look using CSS or SASS variable-based styling while still maintaining high performance and responsive rendering across different screen sizes.

Now, the features are where the rubber meets the road: drag-and-drop support; export to Excel, PDF, and PNG; custom data fields and filtering; dependency arrows between events for process visualization; and many more. There’s a comprehensive list of the scheduler features on Bryntum’s website.

That’s not all: there’s also a Pro version. The highlight is the Resource Histogram, an intuitive overview of resource allocation. Additionally, there are travel time visualization, an overview of non-working hours, skill matching, and more in the Pro tool chest. Plus, rewiring any of the default actions to your liking is an effortless undertaking thanks to the many configuration settings provided in the API.

Support is available by means of professional services, active forums, and extensive documentation complete with live demos. You can start a 45-day free trial right away or contact their sales team for any inquiries regarding pricing and features.

DHTMLX Scheduler

Here is another modern scheduler, with integrations available for over ten different frameworks, including popular JavaScript ones like jQuery. The DHTMLX scheduler offers a default theme (in both light and dark modes) that uses gradients to stand out. However, if that’s not your cup of tea, and neither are any of the other themes, there’s also the option to tinker under the CSS hood to customize the look.

Apart from fundamental features, like being able to set up recurring appointments, there’s also a Google Maps integration to add rich location data to events. The Pro version offers the multiple resource view — an overview of allocated resources — that comes in handy for delegating tasks and tracking the efficiency of each delegation. PDF export is available as an add-on.
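For orientation, initializing the scheduler in plain JavaScript typically looks like the sketch below; the container id, config values, and event data are illustrative assumptions, so check the current DHTMLX docs for the exact API of your version:

```js
// Assumes dhtmlxscheduler.js and its CSS are already loaded on the page,
// and that an element with id="scheduler_here" exists in the markup.
scheduler.config.first_hour = 6;   // start the visible day at 06:00
scheduler.config.last_hour  = 20;  // end it at 20:00

// init(containerId, initialDate, initialView)
scheduler.init('scheduler_here', new Date(2024, 7, 8), 'week');

// Load some events into the view
scheduler.parse([
    { id : 1, text : 'Surgery, Room 2', start_date : '2024-08-08 09:00', end_date : '2024-08-08 11:00' }
]);
```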

Support is accessible through technical requests, forums, and API documentation. A 30-day free trial is available. For licensing and pricing, you can drop them a message or schedule a call.

MindFusion Scheduler

Although not as feature-extensive, the budget-friendly MindFusion scheduler (ready to download and test) is a good place to start if its more rudimentary components are enough to get the job done for you and you don’t mind rolling up your sleeves to do a little revamping.

Basics like appointments, multiple calendar views, drag-and-drop, and timetables for resources are covered. CSS themes allow for easy alterations according to your preferences. Schedule export/import in XML and JSON is included. A few sample use cases are also available for you to peruse.

For technical support, you can drop in a ticket, check out its forum, or even acquire any of its consulting services. They also offer discounts for eligible customers. Even though seemingly a little bland, their documentation is robust. For any queries, you can reach them through email or phone.

DayPilot Pro Scheduler

There’s an open-source version that you can build up from. However, the Pro version is not to be missed. The technical support period for the Pro version is shorter than what DayPilot’s peers offer, but it doesn’t skimp on features and is reasonably priced for a single-developer plan.

Most features expected of a Pro version are included, like the resources overview, resource utilization snapshots, comprehensive filtering, drag-and-drop, export options, and so on. Besides the handful of themes provided, custom styling can be done with CSS. Some samples are also available to learn from.

The scheduler works with most of the common JavaScript libraries, and documentation is available for each separately. Support is available through forums and email. You can contact them for quotes and licensing queries. There’s a 60-day trial available.

Syncfusion Scheduler

With a complete keyboard interaction setup, Syncfusion Scheduler is accessible and friendly. In addition to vanilla JavaScript, it’s also available for React, Angular, Vue, and Blazor frameworks.

All major scheduler features are present, and appointments, timelines, and agendas are covered. Export to Excel documents is available; export and import are possible with iCal documents.

Styling can be done by overriding the default CSS file. There’s a theme builder, too, to assist in creating a custom theme. And a few UI kits, like one for Adobe XD, are provided with the scheduler.

API documentation is available for all supported integrations. There’s a 30-day trial period, and you can check out some editable demos to try the product. You can also contact Syncfusion through a form or any of the contacts provided on their site for more information. Technical support is provided via requests and forums.

Conclusion

For teams working remotely across the globe or together in an office, as well as for any group of collaborating users, a scheduler can be a valuable tool indeed. If you’d like to share a scheduler that you’ve tried and loved, please let us know in the comments. Happy scheduling!

]]>
hello@smashingmagazine.com (Preethi Sam)
<![CDATA[It’s Time To Talk About “CSS5”]]> https://smashingmagazine.com/2024/08/time-to-talk-about-css5/ https://smashingmagazine.com/2024/08/time-to-talk-about-css5/ Mon, 05 Aug 2024 10:00:00 GMT We have been talking about CSS3 for a long time. Call me a fossil, but I still remember the new border-radius property feeling like the most incredible CSS3 feature. We have moved on since we got border-radius and a slew of new features dropped in a single CSS3 release back in 2009.

CSS, too, has moved on as a language, and yet “CSS3” is still in our lexicon as the last “official” semantically-versioned release of the CSS language.

It’s not as though we haven’t gotten any new and exciting CSS features between 2009 and 2024; it’s more that the process of developing, shipping, and implementing new CSS features is a guessing game of sorts.

We see CSS Working Group (CSSWG) discussions happening in the open. We have the draft specifications and an archive of versions at our disposal. The resources are there! But the develop-ship-implement flow remains elusive and leaves many of us developers wondering: When is the next CSS release, and what’s in it?

This is a challenging balancing act. We have spec authors, code authors, and user agents working both interdependently and independently and the communication gaps are numerous and wide. The result? New features take longer to be implemented, leading to developers taking longer to adopt them. We might even consider CSS3 to be the last great big “marketing” push for CSS as a language.

That’s what the CSS-Next community is grappling with at this very moment. If you haven’t heard of the group, you’re not alone, but either way, it’s high time we shed light on it and the ideas coming from it. As someone participating in the group, I thought I would share the conversations we’re having and how we’re approaching the way CSS releases are communicated.

Meet The CSS-Next Community

Before we formally “meet” the CSS-Next group, it’s worth knowing that it is still officially referred to as the CSS4 Community Group as far as the W3C is concerned.

And that might be the very first thing you ought to know about CSS-Next: it is part of the W3C and consists of CSSWG members, developers, designers, user agents, and, really, anyone passionate about the web and who wants to participate in the discussion. W3C groups like CSS-Next are open to everyone to bring our disparate groups together, opening opportunities to shape tomorrow’s vision of the web.

CSS-Next, in particular, is where people gather to discuss the possibility of raising awareness of CSS evolutions during the last decade. At its core, the group is discussing approaches for bundling CSS features that have shipped since CSS3 was released in 2009 and how to name the bundle (or bundles, perhaps) so we have a way of referring to this particular “era” of CSS and pushing those features forward.

Why We Need A Group Like CSS-Next

Let’s go back a few years. More specifically, let’s return to the year 2020.

It all started when Safari Evangelist Jen Simmons posted an open issue in the CSSWG’s GitHub repo for CSS draft specifications requesting a definition for a “CSS4” release.

This might be one of the biggest responses — if not the biggest response — to a CSSWG issue based solely on emoji reactions.

The idea of defining CSS4 was backed by Chris Coyier, Nicole Sullivan, and PPK. The point is to push technologies forward and help educators and site owners, even if it’s just for the sake of marketing.

But why is this important? Why should we care about another level or “CSS Saga”? To get to that point, we might need to talk about CSS3 and what exactly it defines.

What Exactly Is “CSS3”?

The CSS3 grouping of features included level-3 specs for features from typography to selectors and backgrounds. From this point on, each CSS spec has been numbered individually.

However, CSS3 is still the most common term developers use to define the capabilities of modern CSS. We see this across the web, from the way educational institutions teach CSS to the job requirements on resumes.

The term CSS3 loses meaning year-over-year. You can see the dilution everywhere. The earliest CSS3 drafts were published in June 1999 — before many of my colleagues were even born — and yet CSS is one of the fastest-growing languages in the current webscape.

What About The CSS3 Logo?

When we look at job postings, we run into vacancies asking for knowledge of CSS3, which is over 10 years old. Without an updated level, we’re just asking if you’ve written CSS since the border-radius property came out. Furthermore, when we want to learn CSS, a CSS3 logo next to educational materials no longer signals current material. It kind of feels like time has stood still.

Here’s an example job posting that illustrates the issue:

But that’s not all. If you do a Google search on “Learn CSS” and check the images, you might be surprised how many CSS3 logos you can spot:

About 50% of the images show the CSS3 badge. To me, this clearly signals:

  1. People want badges or logos to aid in signaling skills.
  2. The CSS3 brand has made a large impact on the web ecosystem.
  3. The CSS3 logo has reached the end of its efficacy.

CSS3 had, and still has, a huge impact on the ecosystem. The same logo is trying to say it covers everything from Flexbox all the way to color-mix() — a spread of hundreds of CSS features.

What Exactly Does “Modern CSS” Mean?

CSS3 and HTML5 were big improvements to those respective languages — we’ve come a long way since then. We have features that people didn’t even think were possible back in 2012 (when we officially spoke of CSS3 as a level).

For example, there was a time when people thought that containers didn’t know anything and that it would never be possible to style an element based on the width of its parent. But now, of course, we have CSS Container Queries, and all of this is possible today. The things that are possible with CSS have changed over time, as so beautifully told by Miriam Suzanne at CSS Day 2023.

We do not want to ignore the success of CSS3 and say it is wrong; in fact, we believe it’s time to repeat the tremendous success of CSS3.

Imagine yourself 10 years from now reading about a “modern” CSS feature that was introduced as many as 10 years ago. It wouldn’t add up, right? “Modern” is not a future-proof name, something that Geoff Graham opined when asking the correct question, “What exactly is ‘Modern CSS’?”

“Naming is always hard, yet it’s just something we have to do in CSS to properly select things. I think it’s time we start naming [CSS releases] like this, too. It’s only a matter of time before ‘modern’ isn’t ‘modern’ anymore.”

— Geoff Graham

This is exactly where the CSS-Next community group comes in.

Let’s Talk About “CSS Eras”

The CSS-Next community group aims to align and modernize the general understanding of CSS in the wider developer community by labeling feature sets that have shipped since the initial set of CSS3 features, helping developers upskill their understanding of CSS across the ecosystem.

Why Isn’t This Part Of The Web Platform Baseline?

The definition of what counts as “current” CSS changes with time. Sometimes, specs are incomplete or haven’t even been drafted. While Baseline looks at the current browser support of a feature in CSS, we want to take a look at the evolution of the language itself. The CSS levels should not care about which browser implemented a feature first.

It might be more nuanced than this in reality, but that’s pretty much the gist. We also don’t want it to become another “modern CSS” bucket. Indeed, referring to CSS3 as an “era” has helped compartmentalize how we can shift into CSS4, CSS5, and beyond. For example, labeling something as a “CSS4” feature provides a hint as to when that feature was born. A feature that reaches “baseline,” meanwhile, merely indicates the status of that feature’s browser implementation, which is a separate concern.

A feature’s era and its implementation status are both indicators that provide meta information about the feature, but they serve different purposes.

Why Not Work With An Annual Snapshot Instead Of A Numbered Era?

It’s fair to wonder if a potential solution is to take a “snapshot” of the CSS feature set each year and use that as a mile marker for CSS feature releases. However, an annual picture of the language is less effective than defining a particular era in which specific features are introduced.

There were a handful of years when CSS was relatively quiet compared to the mad dash of the last few years. Imagine a year in which nothing, or maybe very few, CSS features are shipped, and the snapshot for that year is nearly identical to the previous year’s snapshot. Now imagine CSS explodes the following year with a deluge of new features that result in a massive delta between snapshots. It takes mental agility to compare complete snapshots of the entire language and find what’s new.

Goals And Non-Goals

I think I’ve effectively established that the term “CSS” alone isn’t clear or helpful enough to illustrate the evolution of the language, just as the label “modern” for a certain feature loses meaning over time.

Grouping features in levels that represent different eras of releases — even from a marketing standpoint — offers a good deal of meaning and has a track record of success, as we’ve seen with CSS3.

All of this comes back to a set of goals that the CSS-Next group is rallying around:

  • Help developers learn CSS.
  • Help educators teach CSS.
  • Help employers define modern web skills.
  • Help the community understand the progression of CSS capabilities over time.
  • Create a shared vernacular for describing how CSS evolves.

What we do not want is to:

  • Affect spec definitions.
    CSS-Next is not a group that would define the working process of or influence working groups such as the CSSWG.
  • Create official developer documentation.
    Making something like a new version of MDN doesn’t get us closer to a better understanding of how the language changes between eras.
  • Define browser specification work.
    This should be conducted in relevant standardization or pre-standardization forums (such as the CSSWG or OpenUI).
  • Educate developers on CSS best practices.
    That has much more to do with feature implementations than the features themselves.
  • Manage browser compatibility data.
    Baseline is already doing that, and besides, we’ve already established that feature specifications and implementations are separate concerns.

This doesn’t mean that everything in the last list is null and void. We could, for example, have CSS eras that list all the features specced in that period. And inside that list, there could be a baseline reference for the implementations of those features, making it easier to bring forward some ideas for the next Interop, which informs Baseline.

This leaves the CSS-Next group with a super-clear focus to:

  • Research the community’s understanding of modern CSS,
  • Build a shared understanding of CSS feature evolution since CSS3,
  • Group those features into easily digestible levels (i.e., CSS4, CSS5, and so on), and
  • Educate the community about modern CSS features.

We’d Likely Start With The “CSS5” Era

A lot of thought and work has gone into the way CSS is described in eras. The initial idea was to pick up where CSS3 left off and jump straight into CSS4. But the number of features released between the two eras would be massive, even if we narrowed it down to just the features released since 2020, never mind 2009.

It makes sense, instead, to split the difference and call CSS4 a done deal as of, say, 2018 and a fundamental part of CSS in its current state as we begin with the next logical period: CSS5.

Here’s how the definitions are currently defined:

CSS3 (~2009-2012):
Level 3 CSS specs as defined by the CSSWG. (immutable)

CSS4 (~2013-2018):
Essential features that were not part of CSS3 but are already a fundamental part of CSS.

CSS5 (~2019-2024):
Newer features whose adoption is steadily growing.

CSS6 (~2025+):
Early-stage features that are planned for future CSS.

Uncle Sam CSS Wants You!

We released a request for comments last May for community input from developers like you. We’ve received a few comments that have been taken into account, but we need much more feedback to help inform our approach.

We want a big, representative response from the community! But that takes awareness, and we need you to make that happen. Anything you can do to let your teams and colleagues know that the CSS-Next group exists and that we’re trying to solve the way we talk about CSS features is greatly appreciated. We want to know what you and others think about the things we’re wrestling with, like whether or not the way we’re grouping eras above is a sound approach, where you think those lines should be drawn, and if you agree that we’re aiming for the right goals.

We also want you to participate. Anyone is welcome to join the CSS-Next group, and we could certainly use help brainstorming ideas. There’s even an incubation group that holds a biweekly hour-long session on Mondays at 8:00 a.m. Pacific Time (2:00 p.m. GMT).

On a completely personal note, I’d like to add that I joined the CSS-Next group purely out of interest but became much more actively involved once the mission became very clear to me. As a developer working in an agency, I see how fast CSS changes and have struggled, like many of you, to keep up.

A seasoned colleague of mine commented the other day that they wouldn’t even know how to approach vanilla CSS on a fresh website project. There is no shame in that! I know many of us feel the same way. So, why not bring it to marketing terms and figure out a better way to frame discussions about CSS features based on eras? You can help get us there!

And if you think I’m blameless when it comes to talking about CSS in generic “modern” terms, all it takes is a quick look at the headline of another Smashing article I authored this year!

Let’s get going with CSS5 and spread the word! Let me hear your thoughts.

Resources

]]>
hello@smashingmagazine.com (Brecht De Ruyte)
<![CDATA[If I Was Starting My Career Today: Thoughts After 15 Years Spent In UX Design (Part 1)]]> https://smashingmagazine.com/2024/08/thoughts-after-15-years-spent-ux-design-part1/ https://smashingmagazine.com/2024/08/thoughts-after-15-years-spent-ux-design-part1/ Fri, 02 Aug 2024 11:00:00 GMT My design career began in 2008. The first book that I read on the topic of design was Photoshop Tips And Tricks by Scott Kelby, which was a book about a very popular design tool, but not about user experience (UX) design itself. Back then, I didn’t know many of the approaches and techniques that even junior designers know today, because they weren’t invented yet, and also because I was just beginning my learning journey and finding my way in UX design. But now I have diverse experience; I’m hiring designers for my team myself, and I know much more.

In my two-part series of articles, I’ll try to share with you what I wish I knew if I was starting my career today.

“If you want to go somewhere, it is best to find someone who has already been there.”

— Robert Kiyosaki

The two-part series contains four sections, each roughly covering one key stage in your beginner career:

  1. Master Your Design Tools
  2. Work on Your Portfolio
  3. Preparing for Your First Interviews: Getting a First Job
  4. In Your New Junior UX Job: On the Way to Grow

I’ll cover the first three topics in this first article and the fourth one in the second article. In addition, I will include very detailed Further Reading sections at the end of each part.

When you’re about to start learning, every day, you will receive new pieces of evidence of how many things you don’t know yet. You will see people who have been doing this for years and you will doubt whether you can do this, too. But there is a nuance I want to highlight: first, take a look at the following screenshot:

This is the Amazon website in 2008 when I was about to start my design career and received my first paycheck as a beginner designer.

And this is how Amazon looked even earlier, in 2002:

Source: versionmuseum.com.

In 2002, Amazon’s net sales reached 3.93 billion US dollars. I dare say they could have hired the very best designers at the time. So today, when you speak to a designer with twenty years of experience and think, “Oh, this designer must be on a very high level now, a true master of his craft,” remind yourself about the state of UX design that existed when that designer’s career was about to start, sometime in the early 2000s!

A lot of the knowledge I gained that is more than five years old is outdated now, and the learning complexity only increases every year.

It doesn’t matter how many years you have been in this profession; what matters are the challenges you met in the last few years and the lessons you’ve learned from them.

Are you a beginner or an aspiring user interface/user experience designer? Don’t be afraid to go through all the steps in your UX design journey. Patience and a good plan will let you become a good designer faster than you think.

“The best time to start was yesterday. The next best time is now.”

— H. Jackson Brown, Jr.

This was the more philosophical part of my writing, where I wanted to help you become better motivated. Now, let’s continue with the more practical things and advice!

Getting Started: Master Your Design Tools

When I was just beginning to learn, most of us did our design work in Adobe Photoshop.

In Photoshop, there were no components, styles, design libraries, auto layouts, and so on. Every screen was in a separate PSD file, and even giving a rectangle rounded corners was a difficult task. Files were “heavy,” and sometimes I needed to wait thirty or more seconds just to open a file and check which screen was inside, while changing a button’s name or label in twenty separate PSD files (each containing only one design screen, remember?) could take up to an hour, depending on the power of your computer.

There were many digital design tools at the time, including Fireworks — which some professionals considered superior to Photoshop, and for quite a few reasons — but this is not the main point of my story. One way or another, Photoshop back then became very popular among designers and we all absolutely had to have it in our toolset, no matter what other tools we also needed and used.

Now computers are much faster, and our design tools have evolved quite a bit, too. For example, I can apply multiple changes to multiple design screens in just a few seconds by using Figma components and a proper structure of the design file, I can design/prototype responsive designs by using auto-layout, and more.

In a word, knowing your design tool can be a real “superpower” for junior UX designers — a power that beginners often ignore. When you know your tool inside out, you’ll spend less time on the design routine, and you’ll have more time for learning new things.

Master your tool(s) of choice (be it Figma Design or Illustrator, Sketch, Affinity Designer, Canva, Framer, and so on) in the most efficient way, and free up to a couple of extra hours every day for reading, doing tutorials, or taking longer breaks.

Learn all the key features and options, and discover and remember the most important hotkeys so you’ll be working without the need to constantly reach for your mouse and navigate the “web” of menus and sub-menus. It’s no secret that we, designers, mostly learn through doing practical tasks. So, imagine how much time it would save you within a few years of your career!

Also, it’s your chance: developers are rolling out new features for beginner designers and pro designers simultaneously, but junior designers usually have more time to learn those updates! So, be faster and get your advantage!

Getting Started: Work On Your Portfolio

You need to admit it: your portfolio (or, to put it more precisely, the lack of it) will be the main pain point at the start.

You may sometimes hear statements such as: “We understand that being a junior designer is not about having a portfolio...” But the fact is that we would all like to see some results of your work, even if it is your very early work on a few design projects or concepts. Remember, if you have something to show, it will always be a considerable advantage!

I have heard from some juniors that they don’t want to invest time in their portfolio because this work is unpaid and time-consuming. But sitting, waiting, and getting rejected again and again is also time-consuming. And spending the first few years of your career in the wrong company is also time-consuming (and disappointing, too). So my advice is to spend some time in advance on showcasing your work and then reap much better results in the near future.

In case you need some extra motivation, here is a quote from Muhammad Ali, regarded as one of the most significant sports figures of the 20th century:

“I hated every minute of training, but I said to myself, ‘Do not quit. Suffer now and live the rest of your life as a champion.’”

— Muhammad Ali

Ready to go but have no idea where to start? Here are a few options:

  • Find a popular product with a rather difficult-to-use or not very elegant interface and research what the users of this product are complaining about the most. Then, as an exercise, design a few interface screens for this product, with their core features explained, publish them on social media, and tag that company. (This approach may not always work, but it’s worth a try.)
  • Sign up for and actively participate in hackathons. As a result, it’s possible that you may get not just a few screens redesigned in Figma but a real working product you can show (and be proud of). Also, you can meet nice people there who may recommend you if you apply for a job at one of the companies they work for.
  • Complete UXchallenge challenges and present how you solved them on LinkedIn.
    Note: You’re not limited to LinkedIn, of course; you can also use Instagram, Facebook, Behance, Dribbble, and so on. But keep in mind that many recruiters prefer LinkedIn.
  • Pick up a website that you use often and check whether it meets the “Ten Usability Heuristics for User Interface Design.” Create a detailed report that lists everything that can be (re)designed better. Publish the report on LinkedIn and also send it to the company that made this website. Don’t forget to tell them why you did that report for their website specifically and that you’re learning UX design, practicing, and actively looking for a job.
  • Visit some popular developer conferences where you would be one of the only designers attending. Talk to people and propose your help for their startups. Who knows, you may become the co-creator of some future cool startup!
  • Choose an area where digitalization hasn’t propagated yet and create a design concept using very modern technologies. For instance, people have been growing plants for thousands of years, but data analysis and visualization dramatically changed the efficiency of that process only lately. The agricultural industry has undergone a remarkable transformation thanks to UX design — a crucial element in ensuring that agricultural applications are not just functional but also intuitive and user-friendly. From precision farming to crop monitoring systems, digital tools have revolutionized the way farmers manage their operations.
    Note: You can check the following article for details: “The Evolution of UX Design in Agricultural Applications.”

Don’t wait until someone hands you your chance on a “silver platter.” There are many projects that need the designer’s hands and help but can’t get such help yet. Assist them and then show the results of your work in your first portfolio. It gives you a huge advantage over other candidates who haven’t worked on their portfolios yet!

Preparing For Your First Interviews: Getting A First Job

From what I’ve heard, getting the first job is the biggest problem for a junior designer, so I will focus on this part in more detail.

Applying For A Job

To reach a goal, you should formulate it correctly. In this case, it’s already formulated, but most candidates understand it wrong. The right goal here is to be invited to an interview — not to get an offer right away or tell everything about your life in the CV document. You just need to break through the first level of filtering.

Note: Some of these tips are for absolute beginners. Do they sound too obvious to you? Apologies if so. However, all of them are based on my personal experience, so I think there are no tips that I should omit.

  • Send your CV and motivational letter (if required in the job description) from the correct email address. It’s always strange to receive a job application from an email such as ‘sad.batman2006@gmail.com’. Seniors are always responsible for the tasks that junior designers complete, and we want to know that you are a serious-minded and responsible person who will help us do our work. Small details, such as the email address you use to get in touch, do matter.
  • Use your real name. I’ve had cases where people have used different names in their emails and CVs. I think it’s too obvious why this will look very strange, so I won’t spend time describing it in detail.
  • Skill representations. Use the well-accepted standards. I have seen some CVs created with the help of services such as CV Maker where skills (level of English, how well you know Figma, Illustrator, and other design tools, and so on) were represented as loaders or diagrams. But there are existing standards, so use them in order to be understood better. For instance, if you describe your level of English knowledge, use the CEFR levels (A1/A2, B1/B2, C1/C2). Don’t make people interpret a diagram instead.
  • Check/proofread the text in your email, CV, and portfolio. We expect that you may not know everything about design, but spelling errors don’t exactly demonstrate your desire to learn or your attention to detail. You can use Grammarly or ChatGPT to check your text, but you should not try to substitute your thoughts with AI-“generated” ideas. Also, make sure to structure the content of your CV well and to format it properly.
  • Read the job description carefully, find matches with your skills, and reflect these in the CV. Recruiters cannot review all the CVs thoroughly. Remember, the goal is to break through the first level of filtering — the recruiter is not a designer and can’t evaluate you and your skills. However, the recruiter can decide whether your CV is relevant to the job description, so it’s very important to tweak the CV by making sure you mention all the skills that you possess and that match the ones found in the specific job description.
  • Don’t count solely on the job application form posted on the company’s website. There were cases when I had no reply after filling out and submitting the official application form but then got an offer after trying to reach a recruiter from that company directly on LinkedIn or via some other available communication channel. So don’t be shy to get in touch directly.
  • Avoid using PDF documents for portfolios or anything else that people need to download before opening. The more time it takes to open and review your portfolio, the less time people will spend checking what’s in it. A link to your portfolio on the web will always work better, and it’s also a much more professional approach! You can use platforms such as Behance (or similar), or you can create your case studies in Figma and paste the shareable link into your CV.

Note: There are many ways to show your portfolio, and Figma is only one of them. For ideas, you can check “Figma Portfolio Templates & Examples” (a curated selection of portfolio templates for Figma). Or even better, you can self-host your portfolio on your own domain and website; however, this may require some more advanced technical skills and knowledge, so you can leave this idea for later.

Completing A Test Task

The test task aims to assess what we can expect from you in the workplace. And this is not just about the quality of your design skills — it’s also about how you will communicate with others and how you will be able to propose practical solutions to problems.

What do I mean by “practical solutions”? In the real world, designers always work within certain limitations (constraints), such as time, budget, team capacity, and so on. So, if you have some bright ideas that are likely very hard to implement, keep these for the interview. The test task is a way to show that you are someone who can define the correct problems and do the proper work, i.e., find the solutions to them.

A few words of advice on how to do exactly that:

  • If you have a chance to speak to the target audience, do it, especially if the test task is to make an existing product better. You don’t have to do complete research, but if it’s a popular product that everyone uses, you can ask your friends about their experience of using it. If it’s not, check what people say on Reddit, in reviews on the Apple App Store, or on Google Play. Find video reviews of this product on YouTube and analyze the comments under the video. Also, take a look at similar products and what people say about them. Defining real problems is a key skill for designers.
    Note: How can we conduct UX research when there is no or only limited access to users? Vitaly Friedman outlines a few excellent strategies in his article on this topic: “Why Designers Aren’t Understood.”
  • Prioritize features that you see and can reflect on in the test task. You can use the Kano model or another framework, but don’t skip this step! It is sometimes puzzling to see candidates spending a lot of time on dark mode UI mockups but failing to work on the required key features instead.
    Note: The Kano Analysis model is a tool that enables you to understand how customer emotional responses to products or features can be measured and explored.
  • If you need more time, say so. This will also show what your behavior will be when working on a real project. Raising a problem at the last moment can bring big trouble to the team. Also — and this has happened a few times in my practice — it’s strange to hear:
    • “I didn’t fully complete the test task because I was busy.”
    • OK, if you are too busy (with other things?), then we will have to interview some other candidates.
      My advice is to show dedication and focus toward your current job application assignment.
  • In some cases, candidates try to go the “extra mile” by doing more than was initially asked of them, but with lower quality. Unfortunately, it doesn’t work this way. Instead, you need to do less but better. Of course, there could be exceptions in some cases, like when you do sketching and prototyping, where showing rough ideas is perfectly OK. So, try to find the balance between the volume and the quality of your work. Showing many (but weak) mockups in order to impress with the volume of your work (instead of the quality) is not a good idea.
  • Sometimes, we ask to redesign a screen as a test task. This is not about using better/shinier UI components. Instead, try to understand the user goals on that screen and then think about the most suitable UI components that you can use to serve these user goals.

Recommendations For The Interview

The interview is the most challenging part because the most optimal way to prepare for it depends on the specific company where you’re applying for the job and the interviewer’s experience. But there are still a few “universal” things you can do in order to increase your chances:

  • If I was restricted to giving only one piece of advice, I would say: Be sincere! It’s not an exam, so don’t try to guess the answer if you don’t know it. No one knows everything, and it’s OK — be honest and it will pay off.
  • Research the company and the role before the interview. Check the company’s portfolio, cases, products, and so on, and even look up the names and titles of designers working there.
    Note: It will help a lot if the company has an “About” or “The Team” page on its website; if not, using LinkedIn will probably help, too. When you have researched the role in detail, it will help you define which of your skills will be a good match, and you can then highlight them during the interview.
  • The core questions in a UX design interview are not a secret. Usually, it’s about the design phases, your experience, hobbies, motivation, and so on. Work on these questions and clarify the answers before going to the interview. Just write them down and read them out loud. Check how it all sounds. Converting your design experience into exact words requires brain energy, especially if somebody in front of you is waiting for the answer, so do it beforehand, and you’ll feel much better prepared — and calmer.
  • Listen carefully to the questions you are being asked. Ask the interviewer to clarify if you do not understand a question completely. It’s always weird when the candidate gives an answer that is not related to the question you asked.
  • Don’t be late. Do your best to be on time.
    • If it’s an online interview, check the time zones, the communication tools, and everything else. There’s nothing worse than starting Zoom (or another app that you know you’ll need) at the last minute and discovering that it needs an urgent update. Precious minutes will be lost during the update process while the other party will be patiently waiting for you to come online. And you better also check your headphones, microphone, camera, and Bluetooth connection before the start of the meeting.
    • Similarly, if it’s an in-person interview, plan your trip in advance and add some extra time for something unexpected; it’s better to arrive early than late. The problem is not only about wasting someone’s time; it’s about your emotional balance. If you are late, you will be nervous and make mistakes that you otherwise wouldn’t.
  • Don’t look for a job in the companies of your dreams right from the start. First, pass a few interviews with other companies, get feedback, do some retrospectives, gain some real experience, and be prepared to show your best when you get your chance.
  • Be yourself, but also clearly communicate who you are going to become, as people with goals and a plan always make a better impression. Most companies don’t hire juniors — they hire future middle-level and senior designers. And if you feel that a certain company would not support you in this way, it’s better to try another one. The first few years are the foundation of your future career, so do your best to get into a company where you can grow as a designer.

Conclusion

Thank you for following me so far! Hopefully, you have learned your design tools, worked on your portfolio, and prepared meticulously for your first interviews. If all goes according to plan, sooner or later, you’ll get your first junior UX job. And then you’ll face more challenges, about which I will speak in detail in the second part of my two-part article series.

But before that, do check Further Reading, where I have gathered a few resources that will be very useful if you are just about to begin your UX design career.

Further Reading

Basic Design Resources

A List of Design Resources from the Nielsen Norman Group

]]>
hello@smashingmagazine.com (Andrii Zhdan)
<![CDATA[How To Build A Multilingual Website With Nuxt.js]]> https://smashingmagazine.com/2024/08/how-build-multilingual-website-nuxt-i18n/ https://smashingmagazine.com/2024/08/how-build-multilingual-website-nuxt-i18n/ Thu, 01 Aug 2024 15:00:00 GMT This article is sponsored by Hygraph

Internationalization, often abbreviated as i18n, is the process of designing and developing software applications in a way that they can be easily adapted to various spoken languages like English, German, French, and more without requiring substantial changes to the codebase. It involves moving away from hardcoded strings and adopting techniques for translating text, formatting dates and numbers, and handling different character encodings, among other tasks.

Internationalization gives users the choice to access a given website or application in their native language, which can leave a positive impression on them, making it crucial for reaching a global audience.

What We’re Making

In this tutorial, we’re making a website that puts these i18n pieces together using a combination of libraries and a UI framework. You’ll want to have intermediate proficiency with JavaScript, Vue, and Nuxt to follow along. Throughout this article, we will learn by example and incrementally build a multilingual Nuxt website. Together, we will learn how to provide i18n support for different languages, lazy-load locale messages, and switch locales at runtime.

After that, we will explore features like interpolation, pluralization, and date/time translations.

And finally, we will fetch dynamic localized content from an API server, using Hygraph as our content source. If you do not have a Hygraph account, please create one for free before jumping in.

We will use Vuetify as our UI framework, but please feel free to use another framework if you want. The final code for what we’re building is published in a GitHub repository for reference, and you can also take a look at the final result in a live demo.

The nuxt-i18n Library

nuxt-i18n is a library for implementing internationalization in Nuxt.js applications, and it’s what we will be using in this tutorial. The library is built on top of Vue I18n, the de facto standard library for implementing i18n in Vue applications.

What makes nuxt-i18n ideal for our work is that it provides the comprehensive set of features included in Vue I18n while adding more functionalities that are specific to Nuxt, like lazy loading locale messages, route generation and redirection for different locales, SEO metadata per locale, locale-specific domains, and more.

Initial Setup

Start a new Nuxt.js project and set it up with a UI framework of your choice. Again, I will be using Vuetify to establish the interface for this tutorial.
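
If you are starting completely fresh, creating the project might look something like this (the project name is just a placeholder, and you would add Vuetify, or your chosen UI framework, by following its own Nuxt installation guide):

npx nuxi@latest init nuxt-i18n-demo
cd nuxt-i18n-demo
npm install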

Let us add a basic layout for our website and set up some sample Vue templates.

First, a “Home” page:

<!-- pages/index.vue -->
<template>
  <div>
    <v-card color="cardBackground">
      <v-card-title class="text-overline">
        Home
      </v-card-title>
      <v-card-text>
        This is the home page description
      </v-card-text>
    </v-card>
  </div>
</template>

Next, an “About” page:

<!-- pages/about.vue -->
<template>
  <div>
    <v-card color="cardBackground">
      <v-card-title class="text-overline">
        About
      </v-card-title>
      <v-card-text>
        This is the about page description
      </v-card-text>
    </v-card>
  </div>
</template>

This gives us a bit of a boilerplate that we can integrate our i18n work into.

Translating Plain Text

The page templates look good, but notice how the text is hardcoded. As far as i18n goes, hardcoded content is difficult to translate into different locales. That is where the nuxt-i18n library comes in, providing the language-specific strings we need for the Vue components in the templates.

We’ll start by installing the library via the command line:

npx nuxi@latest module add i18n

Inside the nuxt.config.ts file, we need to ensure that we have @nuxtjs/i18n inside the modules array. We can use the i18n property to provide module-specific configurations.

// nuxt.config.ts
export default defineNuxtConfig({
  // ...
  modules: [
    ...
    "@nuxtjs/i18n",
    // ...
  ],
  i18n: {
    // nuxt-i18n module configurations here
  }
  // ...
});

Since the nuxt-i18n library is built on top of the Vue I18n library, we can utilize its features in our Nuxt application as well. Let us create a new file, i18n.config.ts, which we will use to provide all vue-i18n configurations.

// i18n.config.ts
export default defineI18nConfig(() => ({
  legacy: false,
  locale: "en",
  messages: {
    en: {
      homePage: {
        title: "Home",
        description: "This is the home page description."
      },
      aboutPage: {
        title: "About",
        description: "This is the about page description."
      },
    },
  },
}));

Here, we have specified internationalization configurations, like using the en locale, and added messages for the en locale. These messages can be used inside the markup in the templates we made with the help of a $t function from Vue I18n.

Next, we need to link the i18n.config.ts configurations in our Nuxt config file.

// nuxt.config.ts
export default defineNuxtConfig({
  ...
  i18n: {
    vueI18n: "./i18n.config.ts"
  }
  ...
});

Now, we can use the $t function in our components — as shown below — to parse strings from our internationalization configurations.

Note: There’s no need to import $t since we have Nuxt’s default auto-import functionality.

<!-- pages/index.vue -->
<template>
  <div>
    <v-card color="cardBackground">
      <v-card-title class="text-overline">
        {{ $t("homePage.title") }}
      </v-card-title>
      <v-card-text>
        {{ $t("homePage.description") }}
      </v-card-text>
    </v-card>
  </div>
</template>

Lazy Loading Translations

We have the title and description served from the configurations. Next, we can add more languages to the same config. For example, here’s how we can establish translations for English (en), French (fr), and Spanish (es):

// i18n.config.ts
export default defineI18nConfig(() => ({
  legacy: false,
  locale: "en",
  messages: {
    en: {
      // English
    },
    fr: {
      // French
    },
    es: {
      // Spanish
    }
  },
}));

For a production website with a lot of content that needs translating, it would be unwise to bundle all of the messages from different locales in the main bundle. Instead, we should use the nuxt-i18n lazy loading feature to asynchronously load only the required language rather than all of them at once. Also, having messages for all locales in a single configuration file can become difficult to manage over time, and breaking them up like this makes things easier to find.

Let’s set up the lazy loading feature in nuxt.config.ts:

// etc.
  i18n: {
    vueI18n: "./i18n.config.ts",
    lazy: true,
    langDir: "locales",
    locales: [
      {
        code: "en",
        file: "en.json",
        name: "English",
      },
      {
        code: "es",
        file: "es.json",
        name: "Spanish",
      },
      {
        code: "fr",
        file: "fr.json",
        name: "French",
      },
    ],
    defaultLocale: "en",
    strategy: "no_prefix",
  },

// etc.

This enables lazy loading and specifies the locales directory that will contain our locale files. The locales array configuration specifies from which files Nuxt.js should pick up messages for a specific language.

Now, we can create individual files for each language. I’ll drop all three of them right here:


// locales/en.json
{
  "homePage": {
    "title": "Home",
    "description": "This is the home page description."
  },
  "aboutPage": {
    "title": "About",
    "description": "This is the about page description."
  },
  "selectLocale": {
    "label": "Select Locale"
  },
  "navbar": {
    "homeButton": "Home",
    "aboutButton": "About"
  }
}
// locales/fr.json
{
  "homePage": {
    "title": "Bienvenue sur la page d'accueil",
    "description": "Ceci est la description de la page d'accueil."
  },
  "aboutPage": {
    "title": "À propos de nous",
    "description": "Ceci est la description de la page à propos de nous."
  },
  "selectLocale": {
    "label": "Sélectionner la langue"
  },
  "navbar": {
    "homeButton": "Accueil",
    "aboutButton": "À propos"
  }
}
// locales/es.json
{
  "homePage": {
    "title": "Bienvenido a la página de inicio",
    "description": "Esta es la descripción de la página de inicio."
  },
  "aboutPage": {
    "title": "Sobre nosotros",
    "description": "Esta es la descripción de la página sobre nosotros."
  },
  "selectLocale": {
    "label": "Seleccione el idioma"
  },
  "navbar": {
    "homeButton": "Inicio",
    "aboutButton": "Acerca de"
  }
}

We have set up lazy loading, added multiple languages to our application, and moved our locale messages to separate files. The user gets the right locale for the right message, and the locale messages are kept in a maintainable manner inside the code base.

Switching Between Languages

We have different locales, but to see them in action, we will build a component that can be used to switch between the available locales.

<!-- components/select-locale.vue -->
<script setup>
const { locale, locales, setLocale } = useI18n();

const language = computed({
  get: () => locale.value,
  set: (value) => setLocale(value),
});
</script>

<template>
  <v-select
    :label="$t('selectLocale.label')"
    variant="outlined"
    color="primary"
    density="compact"    
    :items="locales"
    item-title="name"
    item-value="code"
    v-model="language"
  ></v-select>
</template>

This component uses the useI18n hook provided by the Vue I18n library and a computed property language to get and set the global locale from a <select> input. To make this even more like a real-world website, we’ll include a small navigation bar that links up all of the website’s pages.

<!-- components/navbar.vue -->
<template>
  <v-app-bar app :elevation="2" class="px-2">
    <div>
      <v-btn color="button" to="/">
        {{ $t("navbar.homeButton") }}
      </v-btn>
      <v-btn color="button" to="/about">
        {{ $t("navbar.aboutButton") }}
      </v-btn>
    </div>
    <v-spacer />
    <div class="mr-4 mt-6">
      <SelectLocale />
    </div>
  </v-app-bar>
</template>

That’s it! Now, we can switch between languages on the fly.

We have a basic layout, but I thought we’d take this a step further and build a playground page we can use to explore more i18n features that are pretty useful when building a multilingual website.

Interpolation and Pluralization

Interpolation and pluralization are internationalization techniques for handling dynamic content and grammatical variations across different languages. Interpolation allows developers to insert dynamic variables or expressions into translated strings. Pluralization addresses the complexities of plural forms in languages by selecting the appropriate grammatical form based on numeric values. With the help of interpolation and pluralization, we can create more natural and accurate translations.

To use pluralization in our Nuxt app, we’ll first add a configuration to the English locale file.

// locales/en.json
{
  // etc.
  "playgroundPage": {
    "pluralization": {
      "title": "Pluralization",
      "apple": "No Apple | One Apple | {count} Apples",
      "addApple": "Add"
    }
  }
  // etc.
}

The pluralization configuration set up for the key apple defines an output — No Apple — if a count of 0 is passed to it, a second output — One Apple — if a count of 1 is passed, and a third — 2 Apples, 3 Apples, and so on — if the count passed in is greater than 1.

Here is how we can use it in our component. Whenever you click on the add button, you will see pluralization in action, changing the string:

<!-- pages/playground.vue -->
<script setup>
const appleCount = ref(0);
const addApple = () => {
  appleCount.value += 1;
};
</script>
<template>
  <v-container fluid>
    <!-- PLURALIZATION EXAMPLE  -->
    <v-card color="cardBackground">
      <v-card-title class="text-overline">
        {{ $t("playgroundPage.pluralization.title") }}
      </v-card-title>

      <v-card-text>
        {{ $t("playgroundPage.pluralization.apple", { count: appleCount }) }}
      </v-card-text>
      <v-card-actions>
        <v-btn
          @click="addApple"
          color="primary"
          variant="outlined"
          density="comfortable"
          >{{ $t("playgroundPage.pluralization.addApple") }}</v-btn
        >
      </v-card-actions>
    </v-card>
  </v-container>
</template>

To use interpolation in our Nuxt app, first, add a configuration in the English locale file:

// locales/en.json
{
  ...
  "playgroundPage": {
    ... 
    "interpolation": {
      "title": "Interpolation",
      "sayHello": "Hello, {name}",
      "hobby": "My favourite hobby is {0}.",
      "email": "You can reach out to me at {account}{'@'}{domain}.com"
    },
    // etc. 
  }
  // etc.
}

The message for sayHello expects an object passed to it having a key name when invoked — a process known as named interpolation.

The message hobby expects an array to be passed to it and will pick up the 0th element, which is known as list interpolation.

The message email expects an object with keys account and domain and joins them with a literal string "@". This is known as literal interpolation.

Below is an example of how to use it in the Vue components:

<!-- pages/playground.vue -->
<template>
  <v-container fluid>
    <!-- INTERPOLATION EXAMPLE  -->
    <v-card color="cardBackground">
      <v-card-title class="text-overline">
        {{ $t("playgroundPage.interpolation.title") }}
      </v-card-title>
      <v-card-text>
        <p>
          {{
            $t("playgroundPage.interpolation.sayHello", {
              name: "Jane",
            })
          }}
        </p>
        <p>
          {{
            $t("playgroundPage.interpolation.hobby", ["Football", "Cricket"])
          }}
        </p>
        <p>
          {{
            $t("playgroundPage.interpolation.email", {
              account: "johndoe",
              domain: "hygraph",
            })
          }}
        </p>
      </v-card-text>
    </v-card>
  </v-container>
</template>

Date & Time Translations

Translating dates and times involves formatting them according to the conventions of different locales. We can use Vue I18n’s features for formatting date strings, handling time zones, and translating day and month names. The configuration for this is provided using the datetimeFormats key inside the vue-i18n config object.

// i18n.config.ts
export default defineI18nConfig(() => ({
  fallbackLocale: "en",
  datetimeFormats: {
    en: {
      short: {
        year: "numeric",
        month: "short",
        day: "numeric",
      },
      long: {
        year: "numeric",
        month: "short",
        day: "numeric",
        weekday: "short",
        hour: "numeric",
        minute: "numeric",
        hour12: false,
      },
    },
    fr: {
      short: {
        year: "numeric",
        month: "short",
        day: "numeric",
      },
      long: {
        year: "numeric",
        month: "short",
        day: "numeric",
        weekday: "long",
        hour: "numeric",
        minute: "numeric",
        hour12: true,
      },
    },
    es: {
      short: {
        year: "numeric",
        month: "short",
        day: "numeric",
      },
      long: {
        year: "2-digit",
        month: "short",
        day: "numeric",
        weekday: "long",
        hour: "numeric",
        minute: "numeric",
        hour12: true,
      },
    },
  },
}));

Here, we have set up short and long formats for all three languages. If you are coding along, you will be able to see available configurations for fields, like month and year, thanks to TypeScript and Intellisense features provided by your code editor. To display the translated dates and times in components, we should use the $d function and pass the format to it.

<!-- pages/playground.vue -->
<template>
  <v-container fluid>
    <!-- DATE TIME TRANSLATIONS EXAMPLE  -->
    <v-card color="cardBackground">
      <v-card-title class="text-overline">
        {{ $t("playgroundPage.dateTime.title") }}
      </v-card-title>
      <v-card-text>
        <p>Short: {{ $d(new Date(), "short") }}</p>
        <p>Long: {{ $d(new Date(), "long") }}</p>
      </v-card-text>
    </v-card>
  </v-container>
</template>

Localization On The Hygraph Side

We saw how to implement localization with static content. Now, let’s look at how to fetch dynamic localized content in Nuxt.

We can build a blog page in our Nuxt App that fetches data from a server. The server API should accept a locale and return data in that specific locale.

Hygraph has a flexible localization API that allows you to publish and query localized content. If you haven’t created a free Hygraph account yet, you can do that on the Hygraph website to continue following along.

Go to Project Settings → Locales and add locales for the API.

We have added two locales: English and French. Now we need a localized_post model in our schema that has only two fields: title and body. Make sure to mark these as “Localized” fields while creating them.

To add permissions to consume the localized content, go to Project Settings → Access → API Access → Public Content API and assign Read permissions to the localized_post model.

Now, we can go to the Hygraph API playground and add some localized data to the database with the help of GraphQL mutations. To limit the scope of this example, I am simply adding data from the Hygraph API playground. In an ideal world, a create/update mutation would be triggered from the front end after receiving user input.

Run this mutation in the Hygraph API playground:

mutation createLocalizedPost {
  createLocalizedPost(
    data: {
      title: "A Journey Through the Alps", 
      body: "Exploring the majestic mountains of the Alps offers a thrilling experience. The stunning landscapes, diverse wildlife, and pristine environment make it a perfect destination for nature lovers.", 
      localizations: {
        create: [
          {locale: fr, data: {title: "Un voyage à travers les Alpes", body: "Explorer les majestueuses montagnes des Alpes offre une expérience palpitante. Les paysages époustouflants, la faune diversifiée et l'environnement immaculé en font une destination parfaite pour les amoureux de la nature."}}
        ]
      }
    }
  ) {
    id
  }
}

The mutation above creates a post with the en locale and includes a fr version of the same post. Feel free to add more data to your model if you want to see things work from a broader set of data.

Putting Things Together

Now that we have Hygraph API content ready for consumption, let’s take a moment to understand how it’s consumed inside the Nuxt app.

To do this, we’ll install nuxt-graphql-client to serve as the app’s GraphQL client. This is a minimal GraphQL client for performing GraphQL operations without having to worry about complex configurations, code generation, typing, and other setup tasks.

npx nuxi@latest module add graphql-client

// nuxt.config.ts
export default defineNuxtConfig({
  modules: [
    // ...
    "nuxt-graphql-client"
    // ...
  ],
  runtimeConfig: {
    public: {
      GQL_HOST: 'ADD_YOUR_GQL_HOST_URL_HERE_OR_IN_.env'
    }
  },
});

Next, let's add our GraphQL queries in graphql/queries.graphql.

query getPosts($locale: [Locale!]!) {
  localizedPosts(locales: $locale) {
    title
    body
  }
}

The GraphQL client will automatically scan .graphql and .gql files and generate client-side code and typings in the .nuxt/gql folder. All we need to do is stop and restart the Nuxt application. After restarting the app, the GraphQL client will allow us to use a GqlGetPosts function to trigger the query.

Now, we will build the Blog page by querying the Hygraph server and showing the dynamic data.

// pages/blog.vue
<script lang="ts" setup>
  import type { GetPostsQueryVariables } from "#gql";
  import type { PostItem, Locale } from "../types/types";

  const { locale } = useI18n();
  const posts = ref<PostItem[]>([]);
  const isLoading = ref(false);
  const isError = ref(false);

  const fetchPosts = async (localeValue: Locale) => {
    try {
      isLoading.value = true;
      const variables: GetPostsQueryVariables = {
        locale: [localeValue],
      };
      const data = await GqlGetPosts(variables);
      posts.value = data?.localizedPosts ?? [];
    } catch (err) {
      console.error("Fetch error, something went wrong", err);
      isError.value = true;
    } finally {
      isLoading.value = false;
    }
  };

  // Fetch posts on component mount
  onMounted(() => {
    fetchPosts(locale.value as Locale);
  });

  // Watch for locale changes
  watch(locale, (newLocale) => {
    fetchPosts(newLocale as Locale);
  });
</script>

This code gets the current locale from the useI18n hook and passes it to the fetchPosts function when the Vue component is mounted. The fetchPosts function passes the locale to the GraphQL query as a variable and obtains localized data from the Hygraph server. We also have a watcher on the locale so that whenever the user changes the global locale, we make an API call to the server again and fetch posts in that locale.
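
Note that the PostItem and Locale types imported at the top of the script aren’t defined anywhere in this tutorial. Here is a minimal sketch of what types/types.ts might contain, with the shape assumed from how the types are used above:

// types/types.ts
// Assumed definitions for illustration; adjust to match your schema and configured locales.
export type Locale = "en" | "fr" | "es";

export interface PostItem {
  title?: string | null;
  body?: string | null;
}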

And, finally, let’s add markup for viewing our fetched data!

<!-- pages/blog.vue -->
<template>
  <v-container fluid>
    <v-card-title class="text-overline">Blogs</v-card-title>
    <div v-if="isLoading">
      <v-skeleton-loader type="card" v-for="n in 2" :key="n" class="mb-4" />
    </div>
    <div v-else-if="isError">
      <p>Something went wrong while getting blogs. Please check the logs.</p>
    </div>
    <div v-else>
      <div
        v-for="(post, index) in posts"
        :key="post.title || index"
        class="mb-4"
      >
        <v-card color="cardBackground">
          <v-card-title class="text-h6">{{ post.title }}</v-card-title>
          <v-card-text>{{ post.body }}</v-card-text>
        </v-card>
      </div>
    </div>
  </v-container>
</template>

Awesome! If all goes according to plan, then your app should look something like the one in the following video.

Wrapping Up

Check that out — we just built the functionality for translating content on a multilingual website! Now, a user can select a locale from a list of options, and the app fetches content for the selected locale and automatically updates the displayed content.

Did you expect translations to require more difficult steps? It’s pretty amazing that we’re able to cobble together a couple of libraries, hook them up to an API, and wire everything up to render on a page.

Of course, there are other libraries and resources for handling internationalization in a multilingual context. The exact tooling is less the point than it is seeing what pieces are needed to handle dynamic translations and how they come together.

]]>
hello@smashingmagazine.com (Tim Benniks)
<![CDATA[Sweet Nostalgia In August (2024 Wallpapers Edition)]]> https://smashingmagazine.com/2024/07/desktop-wallpaper-calendars-august-2024/ https://smashingmagazine.com/2024/07/desktop-wallpaper-calendars-august-2024/ Wed, 31 Jul 2024 09:30:00 GMT Everybody loves a beautiful wallpaper to freshen up their desktops and home screens, right? To cater for new and unique artworks on a regular basis, we started our monthly wallpapers series more than 13 years ago, and from the very beginning to today, artists and designers from across the globe have accepted the challenge and submitted their designs to it. Just like this month.

In this post, you’ll find their wallpaper designs for August 2024. All of them come in versions with and without a calendar and can be downloaded for free. As a little bonus goodie, we also added a selection of August favorites from our wallpapers archives that are just too good to be forgotten. A big thank-you to everyone who shared their designs with us this month! Happy August!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.

Nostalgia

“August, the final breath of summer, brings with it a wistful nostalgia for a season not yet past.” — Designed by Ami Totorean from Romania.

Relax In Bora-Bora

“As we have taken a liking to diving through the coral reefs, we’ll also spend August diving and took the leap to Bora-Bora. There we enjoy the sea and nature and above all, we rest to gain strength for the new course that is to come.” — Designed by Veronica Valenzuela from Spain.

Sandcastle Day

“Join us on Sandcastle Day for a fun-filled beach adventure, where creativity meets the sand — build, play, and enjoy the sun!” — Designed by PopArt Studio from Serbia.

Banana!

Designed by Ricardo Gimenes from Spain.

Cullion

Designed by Bhabna Basak from India.

Pirate Aged Rum

Designed by Ricardo Gimenes from Spain.

World Friendship Day

“Cherish the bonds of friendship, share smiles, and create beautiful memories with your friends. Let’s spread love and joy together!” — Designed by Reethu from London.

Summer Day

Designed by Kasturi Palmal from India.

Retro Road Trip

“As the sun dips below the horizon, casting a warm glow upon the open road, the retro van finds a resting place for the night. A campsite bathed in moonlight or a cozy motel straight from a postcard become havens where weary travelers can rest, rejuvenate, and prepare for the adventures that await with the dawn of a new day.” — Designed by PopArt Studio from Serbia.

Spooky Campfire Stories

Designed by Ricardo Gimenes from Spain.

Happiness Happens In August

“Many people find August one of the happiest months of the year because of holidays. You can spend days sunbathing, swimming, birdwatching, listening to their joyful chirping, and indulging in sheer summer bliss. August 8th is also known as the Happiness Happens Day, so make it worthwhile.” — Designed by PopArt Studio from Serbia.

Bee Happy!

“August means that fall is just around the corner, so I designed this wallpaper to remind everyone to ‘bee happy’ even though summer is almost over. Sweeter things are ahead!” — Designed by Emily Haines from the United States.

Colorful Summer

“‘Always keep mint on your windowsill in August, to ensure that the buzzing flies will stay outside where they belong. Don’t think summer is over, even when roses droop and turn brown and the stars shift position in the sky. Never presume August is a safe or reliable time of the year.’ (Alice Hoffman)” — Designed by Lívi from Hungary.

Psst, It’s Camping Time…

“August is one of my favorite months, when the nights are long and deep and crackling fire makes you think of many things at once and nothing at all at the same time. It’s about heat and cold which allow you to touch the eternity for a few moments.” — Designed by Igor Izhik from Canada.

Oh La La… Paris’ Night

“I like the Paris’ night! All is very bright!” — Designed by Verónica Valenzuela from Spain.

Summer Nap

Designed by Dorvan Davoudi from Canada.

Shrimp Party

“A nice summer shrimp party!” — Designed by Pedro Rolo from Portugal.

Cowabunga

Designed by Ricardo Gimenes from Spain.

Childhood Memories

Designed by Francesco Paratici from Australia.

Live In The Moment

“My dog Sami inspired me for this one. He lives in the moment and enjoys every second with a big smile on his face. I wish we could learn to enjoy life like he does! Happy August everyone!” — Designed by Westie Vibes from Portugal.

Swimming In The Summer

“It’s the perfect evening and the water is so warm! Can you feel it? You move your legs just a little bit and you feel the water bubbles dancing around you! It’s just you in there, floating in the clean lake and small sparkly lights shining above you! It’s a wonderful feeling, isn’t it?” — Designed by Creative Pinky from the Netherlands.

It’s Vacation O’Clock!

“It’s vacation o’clock! Or is it? While we bend our backs in front of a screen, it’s hard not to think about sandy beaches, flipping the pages of a corny book under the umbrella while waves splash continuously. Summer days! So hard to bear them in the city, so pleasant when you’re living the dolce far niente.” — Designed by ActiveCollab from the United States.

Launch

“The warm, clear summer nights make me notice the stars more — that’s what inspired this space-themed design!” — Designed by James Mitchell from the United Kingdom.

Ahoy

Designed by Webshift 2.0 from South Africa.

Rain, Rain Go Away!

“Remember the nursery rhyme where the little boy pleads the rain to go away? It is one of the most pleasant and beautiful seasons when the whole universe buckles up to dance to the rhythm of the drizzles that splash across the earth. And, it is August, the time of the year when monsoons add a lot of color and beauty to nature. We welcome everyone to enjoy the awesomeness of monsoons.” — Designed by Acodez from India.

Olympic Summer

“The Summer Olympic Games promise two weeks of superhuman struggle for eternal glory. Support your favorites and enjoy hot August.” — Designed by PopArt Studio from Serbia.

Handwritten August

“I love typography handwritten style.” — Designed by Chalermkiat Oncharoen from Thailand.

Hello Again

“In Melbourne it is the last month of quite a cool winter so we are looking forward to some warmer days to come.” — Designed by Tazi from Australia.

El Pollo Pepe

“Summer is beach and swimming pool, but it means countryside, too. We enjoy those summer afternoons with our friend ‘El pollo Pepe’. Happy summer!” — Designed by Veronica Valenzuela from Spain.

Coffee Break Time

Designed by Ricardo Gimenes from Spain.

Subtle August Chamomiles

“Our designers wanted to create something summery, but not very colorful, something more subtle. The first thing that came to mind was chamomile because there are a lot of them in Ukraine and their smell is associated with a summer field.” — Designed by MasterBundles from Ukraine.

Work Hard, Play Hard

“It seems the feeling of summer breaks we had back in school never leaves us. The mere thought of alarm clocks feels wrong in the summer, especially if you’ve recently come back from a trip to the seaside. So, we try to do our best during working hours and then compensate with fun activities and plenty of rest. Cheers!” — Designed by ActiveCollab from the United States.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[Rethinking The Role Of Your UX Teams And Move Beyond Firefighting]]> https://smashingmagazine.com/2024/07/rethinking-role-ux-teams-mov-beyond-firefighting/ https://smashingmagazine.com/2024/07/rethinking-role-ux-teams-mov-beyond-firefighting/ Mon, 29 Jul 2024 18:00:00 GMT In my experience of building and supporting UX teams, most of them are significantly under-resourced. In fact, the term "team" can often be a stretch, with many user experience professionals finding themselves alone in their roles.

Typically, there are way more projects that impact the user experience than the team can realistically work on. Consequently, most UX teams are in a constant state of firefighting and achieve relatively little in improving the overall experience.

We can complain about being under-resourced as much as we want, but the truth is that our teams are unlikely to grow to a size where we have sufficient staff to address every detail of the experience. Therefore, in this post, I want to step back and reconsider the role of user experience professionals and how UX teams can best improve the user experience of an organization.

What Is The Role Of A UX Professional?

There is a danger that as UX professionals, we focus too much on the tools of our trade rather than the desired outcome.

In other words, we tend to think that our role involves activities such as:

  • Prototyping
  • User research
  • Interface design
  • Testing with users

But these are merely the means to an end, not the end goal itself. These activities are also time-consuming and resource-intensive, potentially monopolizing the attention of a small UX team.

Our true role is to improve the experience of users as they interact with our organization’s digital channels.

The ultimate goal for a UX team should be to tangibly enhance the customer experience, rather than solely focusing on executing design artifacts.

This reframing of our role opens up new possibilities for how we can best serve our organizations and their customers. Instead of solely focusing on the tactical activities of UX, we must proactively identify the most impactful opportunities to enhance the overall customer experience.

Changing How We Approach Our Role

If our goal is to elevate the customer experience, rather than solely executing UX activities, we need to change how we approach our role, especially in under-resourced teams.

To maximize our impact, we must shift from a tactical, project-based mindset to a more strategic, leadership-oriented one.

We need to become experience evangelists who can influence the broader organization and inspire others to prioritize and champion user experience improvements across the business.

As I help shape UX teams in organizations, I achieve this by focusing on four critical areas:

  • The creation of resources,
  • The provision of training,
  • The offering of consultative services,
  • Building a UX community.

Let’s explore these in turn.

The Creation Of Resources

It is important for any UX team to demonstrate its value to the organization. One way to achieve this is by creating a set of tangible resources that can be utilized by others throughout the organization.

Therefore, when creating a new UX team, I initially focus on establishing a core set of resources that provide value and make a strong first impression.

Some of the resources I typically focus on producing include:

  • User Experience Playbook
    An online learning resource featuring articles, guides, and cheatsheets that cover topics ranging from conducting surveys to performing A/B testing.
  • Design System
    A set of user interface components that can be used by teams to quickly prototype ideas and fast-track their development projects.
  • Recommended Supplier List
    A list of UX specialists that have been vetted by the team, so departments can be confident in hiring them if they want help improving the user experience.
  • User Research Assets
    A collection of personas, journey maps, and data on user behavior for each of the most common audiences that the organization interacts with.

These resources need to be viewed as living services that your UX team supports and refines over time. Note as well that these resources include educational elements. The importance of education and training cannot be overstated.

The Provision Of Training

By providing training and educational resources, your UX team can empower and upskill the broader organization, enabling them to better prioritize and champion user experience improvements. This approach effectively extends the team’s reach beyond its limited internal headcount, seeking to turn everybody into user experience practitioners.

This training provision should include a blend of 'live' learning and self-learning materials, with a greater focus on the latter since it can be created once and updated periodically.

Most of the self-learning content will be integrated into the playbook and will either be custom-created by your UX team (when specific to your organization) or purchased (when more generic).

In addition to this self-learning content, the team can also offer longer workshops, lunchtime inspirational presentations, and possibly even in-house conferences.

Of course, the devil can be in the details when it comes to the user experience, so colleagues across the organization will also need individual support.

The Offering Of Consultative Services

Although your UX team may not have the capacity to work directly on every customer experience initiative, you can provide consultative services to guide and support other teams. This strategic approach enables your UX team to have a more significant impact by empowering and upskilling the broader organization, rather than solely concentrating on executing design artifacts.

Services I tend to offer include:

  • UX reviews
    A chance for those running digital services to ask a UX professional to review their existing services and identify areas for improvement.
  • UX discovery
    A chance for those considering developing a digital service to get it assessed based on whether there is a user need.
  • Workshop facilitation
    Your UX team could offer a range of UX workshops to help colleagues understand user needs better or formulate project ideas through design thinking.
  • Consultancy clinics
    Regular timeslots where those with questions about UX can “drop in” and talk with a UX expert.

But it is important that your UX team limits their involvement and resists the urge to get deeply involved in the execution of every project. Their role is to be an advisor, not an implementer.

Through the provision of these consultative services, your UX team will start identifying individuals across the organization who value user experience and recognize its importance to some degree. The ultimate goal is to transform these individuals into advocates for UX, a process that can be facilitated by establishing a UX community within your organization.

Building A UX Community

Building a UX community within the organization can amplify the impact of your UX team's efforts and create a cohesive culture focused on customer experience. This community can serve as a network of champions and advocates for user experience, helping spread awareness and best practices throughout the organization.

Begin by creating a mailing list or a Teams/Slack channel. Using these platforms, your UX team can exchange best practices, tips, and success stories. Additionally, you can interact with the community by posing questions, creating challenges, and organizing group activities.

For example, your UX team could facilitate the creation of design principles by the community, which could then be promoted organization-wide. The team could also nurture a sense of friendly competition by encouraging community members to rate their digital services against the System Usability Scale or another metric.

The goal is to keep UX advocates engaged and advocating for UX within their teams, with a continual focus on growing the group and bringing more people into the fold.

Finally, this community can be rewarded for their contributions. For example, they could have priority access to services or early access to educational programs. Anything to make them feel like they are a part of something special.

An Approach Not Without Its Challenges

I understand that many of my suggestions may seem unattainable. Undoubtedly, you are deeply immersed in day-to-day project tasks and troubleshooting. I acknowledge that it is much easier to establish this model when starting from a blank canvas. However, it is possible to transition an existing UX team from tactical project work to UX leadership.

The key to success lies in establishing a new, clear mandate for the group, rather than having it defined by past expectations. This new mandate needs to be supported by senior management, which means securing their buy-in and understanding of the broader value that user experience can provide to the organization.

I tend to approach this by suggesting that your UX team be redefined as a center of excellence (CoE). A CoE refers to a team or department that develops specialized expertise in a particular area and then disseminates that knowledge throughout the organization.

This term is familiar to management and helps shift management and colleague thinking away from viewing the team as UX implementors to a leadership role. Alongside this new definition, I also seek to establish new objectives and key performance indicators with management.

These new objectives should focus on education and empowerment, not implementation. When it comes to key performance indicators, they should revolve around the organization's understanding of UX, overall user satisfaction, and productivity metrics, rather than the success or failure of individual projects.

It is not an easy shift to make, but if you do it successfully, your UX team can evolve into a powerful force for driving customer-centric innovation throughout the organization.

]]>
hello@smashingmagazine.com (Paul Boag)
<![CDATA[Integrating Image-To-Text And Text-To-Speech Models (Part 1)]]> https://smashingmagazine.com/2024/07/integrating-image-to-text-and-text-to-speech-models-part1/ https://smashingmagazine.com/2024/07/integrating-image-to-text-and-text-to-speech-models-part1/ Wed, 24 Jul 2024 10:00:00 GMT Audio descriptions involve narrating contextual visual information in images or videos, improving user experiences, especially for those who rely on audio cues.

At the core of audio description technology are two crucial components: the description and the audio. The description involves understanding and interpreting the visual content of an image or video, which includes details such as actions, settings, expressions, and any other relevant visual information. Meanwhile, the audio component converts these descriptions into spoken words that are clear, coherent, and natural-sounding.

So, here’s something we can do: build an app that generates and announces audio descriptions. The app can integrate a pre-trained vision-language model to analyze image inputs, extract relevant information, and generate accurate descriptions. These descriptions are then converted into speech using text-to-speech technology, providing a seamless and engaging audio experience.

By the end of this tutorial, you will gain a solid grasp of the components that are used to build audio description tools. We’ll spend time discussing what VLM and TTS models are, as well as many examples of them and tooling for integrating them into your work.

When we finish, you will be ready to follow along with a second tutorial in which we level up and build a chatbot assistant that you can interact with to get more insights about your images or videos.

Vision-Language Models: An Introduction

VLMs are a form of artificial intelligence that can understand and learn from visual and linguistic modalities.

They are trained on vast amounts of data that include images, videos, and text, allowing them to learn patterns and relationships between these modalities. In simple terms, a VLM can look at an image or video and generate a corresponding text description that accurately matches the visual content.

VLMs typically consist of three main components:

  1. An image model that extracts meaningful visual information,
  2. A text model that processes and understands natural language,
  3. A fusion mechanism that combines the representations learned by the image and text models, enabling cross-modal interactions.

Generally speaking, the image model — also known as the vision encoder — extracts visual features from input images and maps them to the language model’s input space, creating visual tokens. The text model then processes and understands natural language by generating text embeddings. Lastly, these visual and textual representations are combined through the fusion mechanism, allowing the model to integrate visual and textual information.
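
To make this concrete, here is a minimal sketch of a VLM generating a caption for an image using the Hugging Face transformers pipeline. The checkpoint name is one public BLIP model and the image path is a placeholder, so treat this as an illustration rather than a prescription:

#python
from transformers import pipeline

# The pipeline wraps the vision encoder, language model, and fusion logic in one call.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

result = captioner("photo.jpg")  # accepts a file path, URL, or PIL image
print(result[0]["generated_text"])  # e.g., "a dog running on the beach"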

VLMs bring a new level of intelligence to applications by bridging visual and linguistic understanding. Here are some of the applications where VLMs shine:

  • Image captions: VLMs can provide automatic descriptions that enrich user experiences, improve searchability, and even make visuals accessible to people with vision impairments.
  • Visual answers to questions: VLMs could be integrated into educational tools to help students learn more deeply by allowing them to ask questions about visuals they encounter in learning materials, such as complex diagrams and illustrations.
  • Document analysis: VLMs can streamline document review processes, identifying critical information in contracts, reports, or patents much faster than reviewing them manually.
  • Image search: VLMs could open up the ability to perform reverse image searches. For example, an e-commerce site might allow users to upload image files that are processed to identify similar products that are available for purchase.
  • Content moderation: Social media platforms could benefit from VLMs by identifying and removing harmful or sensitive content automatically before publishing it.
  • Robotics: In industrial settings, robots equipped with VLMs can perform quality control tasks by understanding visual cues and describing defects accurately.

This is merely an overview of what VLMs are and the pieces that come together to generate audio descriptions. To get a clearer idea of how VLMs work, let’s look at a few real-world examples that leverage VLM processes.

VLM Examples

Based on the use cases we covered alone, you can probably imagine that VLMs come in many forms, each with its unique strengths and applications. In this section, we will look at a few examples of VLMs that can be used for a variety of different purposes.

IDEFICS

IDEFICS is an open-access model inspired by DeepMind’s Flamingo, designed to understand and generate text from images and text inputs. It’s similar to OpenAI’s GPT-4 model in its multimodal capabilities but is built entirely from publicly available data and models.

IDEFICS is trained on public data and models — like LLaMA v1 and OpenCLIP — and comes in two versions: the base and instructed versions, each available in 9 billion and 80 billion parameter sizes.

The model combines two pre-trained unimodal models (for vision and language) with newly added Transformer blocks that allow it to bridge the gap between understanding images and text. It’s trained on a mix of image-text pairs and multimodal web documents, enabling it to handle a wide range of visual and linguistic tasks. As a result, IDEFICS can answer questions about images, provide detailed descriptions of visual content, generate stories based on a series of images, and function as a pure language model when no visual input is provided.

PaliGemma

PaliGemma is an advanced VLM that draws inspiration from PaLI-3 and leverages open-source components like the SigLIP vision model and the Gemma language model.

Designed to process both images and textual input, PaliGemma excels at generating descriptive text in multiple languages. Its capabilities extend to a variety of tasks, including image captioning, answering questions from visuals, reading text, detecting subjects in images, and segmenting objects displayed in images.

The core architecture of PaliGemma includes a Transformer decoder paired with a Vision Transformer image encoder that boasts an impressive 3 billion parameters. The text decoder is derived from Gemma-2B, while the image encoder is based on SigLIP-So400m/14.

Through training methods similar to PaLI-3, PaliGemma achieves exceptional performance across numerous vision-language challenges.

PaliGemma is offered in two distinct sets:

  • General Purpose Models (PaliGemma): These pre-trained models are designed for fine-tuning a wide array of tasks, making them ideal for practical applications.
  • Research-Oriented Models (PaliGemma-FT): Fine-tuned on specific research datasets, these models are tailored for deep research on a range of topics.

Phi-3-Vision-128K-Instruct

The Phi-3-Vision-128K-Instruct model is a Microsoft-backed venture that combines text and vision capabilities. It’s built on a dataset of high-quality, reasoning-dense data from both text and visual sources. Part of the Phi-3 family, the model has a context length of 128K, making it suitable for a range of applications.

You might decide to use Phi-3-Vision-128K-Instruct in cases where your application has limited memory and computing power, thanks to its relatively lightweight architecture, which helps with latency. The model works best for generally understanding images, recognizing characters in text, and describing charts and tables.

Yi Vision Language (Yi-VL)

Yi-VL is an open-source AI model developed by 01-ai that can hold multi-round conversations about images, including reading text from images and translating it. This model is part of the Yi LLM series and has two versions: 6B and 34B.

What distinguishes Yi-VL from other models is its ability to carry a conversation, whereas other models are typically limited to a single text input. Plus, it’s bilingual, making it more versatile in a variety of language contexts.

Finding And Evaluating VLMs

There are many, many VLMs and we only looked at a few of the most notable offerings. As you commence work on an application with image-to-text capabilities, you may find yourself wondering where to look for VLM options and how to compare them.

There are two resources in the Hugging Face community you might consider using to help you find and compare VLMs. I use these regularly and find them incredibly useful in my work.

Vision Arena

Vision Arena is a leaderboard that ranks VLMs based on anonymous user voting and reviews. But what makes it great is the fact that you can compare any two models side-by-side for yourself to find the best fit for your application.

And when you compare two models, you can contribute your own anonymous votes and reviews for others to lean on as well.

OpenVLM Leaderboard

OpenVLM is another leaderboard hosted on Hugging Face for getting technical specs on different models. What I like about this resource is the wealth of metrics for evaluating VLMs, including the speed and accuracy of a given VLM.

Further, OpenVLM lets you filter models by size, type of license, and other ranking criteria. I find it particularly useful for finding VLMs I might have overlooked or new ones I haven’t seen yet.

Text-To-Speech Technology

Earlier, I mentioned that the app we are about to build will use vision-language models to generate written descriptions of images, which are then read aloud. The technology that handles converting text to audio speech is known as text-to-speech synthesis or simply text-to-speech (TTS).

TTS converts written text into synthesized speech that sounds natural. The goal is to take published content, like a blog post, and read it out loud in a realistic-sounding human voice.

So, how does TTS work? First, it breaks down text into the smallest units of sound, called phonemes, and this process allows the system to figure out proper word pronunciations. Next, AI enters the mix, including deep learning algorithms trained on hours of human speech data. This is how we get the app to mimic human speech patterns, tones, and rhythms — all the things that make for “natural” speech. The AI component is key as it elevates a voice from robotic to something with personality. Finally, the system combines the phoneme information with the AI-powered digital voice to render the fully expressive speech output.

The result is automatically generated speech that sounds fairly smooth and natural. Modern TTS systems are extremely advanced in that they can replicate different tones and voice inflections, work across languages, and understand context. This naturalness makes TTS ideal for humanizing interactions with technology, like having your device read text messages out loud to you, just like Apple’s Siri or Microsoft’s Cortana.
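
In practice, a modern TTS model hides all of those steps behind a single call. Here is a minimal sketch using the transformers text-to-speech pipeline; the checkpoint is one public VITS model from Kakao Enterprise and is only an example:

#python
import scipy.io.wavfile as wavfile
from transformers import pipeline

tts = pipeline("text-to-speech", model="kakao-enterprise/vits-ljs")

speech = tts("Text-to-speech converts written text into natural-sounding audio.")
# The pipeline returns a dict holding the raw waveform and its sampling rate.
wavfile.write("speech.wav", rate=speech["sampling_rate"], data=speech["audio"].squeeze())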

TTS Examples

Just as we took a moment to review existing vision language models, let’s pause to consider some of the more popular TTS resources that are available.

Bark

Straight from Bark’s model card in Hugging Face:

“Bark is a transformer-based text-to-audio model created by Suno. Bark can generate highly realistic, multilingual speech as well as other audio — including music, background noise, and simple sound effects. The model can also produce nonverbal communication, like laughing, sighing, and crying. To support the research community, we are providing access to pre-trained model checkpoints ready for inference.”

The non-verbal communication cues are particularly interesting and a distinguishing feature of Bark. Check out the various things Bark can do to communicate emotion, pulled directly from the model’s GitHub repo:

  • [laughter]
  • [laughs]
  • [sighs]
  • [music]
  • [gasps]
  • [clears throat]

This could be cool or creepy, depending on how it’s used, but reflects the sophistication we’re working with. In addition to laughing and gasping, Bark is different in that it doesn’t work with phonemes like a typical TTS model:

“It is not a conventional TTS model but instead a fully generative text-to-audio model capable of deviating in unexpected ways from any given script. Different from previous approaches, the input text prompt is converted directly to audio without the intermediate use of phonemes. It can, therefore, generalize to arbitrary instructions beyond speech, such as music lyrics, sound effects, or other non-speech sounds.”

Coqui

Coqui/XTTS-v2 can clone voices in different languages. All it needs for training is a short six-second clip of audio. This means the model can be used to translate audio snippets from one language into another while maintaining the same voice.

At the time of writing, Coqui currently supports 16 languages, including English, Spanish, French, German, Italian, Portuguese, Polish, Turkish, Russian, Dutch, Czech, Arabic, Chinese, Japanese, Hungarian, and Korean.

Parler-TTS

Parler-TTS excels at generating high-quality, natural-sounding speech in the style of a given speaker. In other words, it replicates a person’s voice. This is where many folks might draw an ethical line because techniques like this can be used to essentially imitate a real person, even without their consent, in a process known as “deepfake” and the consequences can range from benign impersonations to full-on phishing attacks.

But that’s not really the aim of Parler-TTS. Rather, it’s good in contexts that require personalized and natural-sounding speech generation, such as voice assistants and possibly even accessibility tooling to aid visual impairments by announcing content.

TTS Arena Leaderboard

Remember how I shared the OpenVLM Leaderboard for finding and comparing vision-language models? Well, there’s an equivalent leaderboard for TTS models over at the Hugging Face community called TTS Arena.

TTS models are ranked by the “naturalness” of their voices, with the most natural-sounding models ranked first. Developers like you and me vote and provide feedback that influences the rankings.

TTS API Providers

What we just looked at are TTS models that are baked into whatever app we’re making. However, some models are consumable via API, so it’s possible to get the benefits of a TTS model without the added bloat if a particular model is made available by an API provider.

Whether you decide to bundle TTS models in your app or integrate them via APIs is totally up to you. Neither method is inherently better than the other — it comes down to the app's requirements and whether the dependability of a baked-in model is worth the memory hit, or vice versa.

All that being said, I want to call out a handful of TTS API providers for you to keep in your back pocket.

ElevenLabs

ElevenLabs offers a TTS API that uses neural networks to make voices sound natural. Voices can be customized for different languages and accents, leading to realistic, engaging voices.

Try the model out for yourself on the ElevenLabs site. You can enter a block of text and choose from a wide variety of voices that read the submitted text aloud.

Colossyan

Colossyan’s text-to-speech API converts text into natural-sounding voice recordings in over 70 languages and accents. From there, the service allows you to match the audio to an avatar to produce something like a complete virtual presentation based on your voice — or someone else’s.

Once again, this is encroaching on deepfake territory, but it’s really interesting to think of Colossyan’s service as a virtual casting call for actors to perform off a script.

Murf.ai

Murf.ai is yet another TTS API designed to generate voiceovers based on real human voices. The service provides a slew of premade voices you can use to generate audio for anything from explainer videos and audiobooks to course lectures and entire podcast episodes.

Amazon Polly

Amazon has its own TTS API called Polly. You can customize the voices using lexicons and Speech Synthesis Markup Language (SSML) tags to establish speaking styles, with affordances for adjusting things like pitch, speed, and volume.
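
For instance, here's a minimal sketch of calling Polly with an SSML prompt via the boto3 library (my own example, assuming AWS credentials are already configured; the voice and prosody values are arbitrary):

#python
# A minimal sketch of Amazon Polly via boto3 (pip install boto3);
# assumes AWS credentials are configured in the environment
import boto3

polly = boto3.client("polly")

# SSML lets us tweak pitch, rate, and volume inline
ssml = """
<speak>
  <prosody rate="slow" pitch="-10%">
    This sentence is read slowly, at a slightly lower pitch.
  </prosody>
</speak>
"""

response = polly.synthesize_speech(
  Text=ssml,
  TextType="ssml",
  VoiceId="Joanna",
  OutputFormat="mp3",
)

# AudioStream is a file-like object containing the MP3 bytes
with open("polly_output.mp3", "wb") as f:
  f.write(response["AudioStream"].read())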

PlayHT

The PlayHT TTS API generates speech in 142 languages. Type what you want it to say, pick a voice, and download the output as an MP3 or WAV file.

Demo: Building An Image-to-Audio Interface

So far, we have discussed the two primary components for generating audio from text: vision-language models and text-to-speech models. We’ve covered what they are, where they fit into the process of generating real-sounding speech, and various examples of each model.

Now, it’s time to apply those concepts to the app we are building in this tutorial (and will improve in a second tutorial). We will use a VLM so the app can glean meaning and context from images, a TTS model to generate speech that mimics a human voice, and then integrate our work into a user interface for submitting images that will lead to generated speech output.

I have decided to base our work on a VLM by Salesforce called BLIP, a TTS model from Kakao Enterprise called VITS, and Gradio as a framework for the design interface. I've covered Gradio extensively in other articles, but the gist is that it is a Python library for building web interfaces, and it offers built-in tools for working with machine learning models, which makes it ideal for a tutorial like this.

You can use completely different models if you like. The whole point is less about the intricacies of a particular model than it is to demonstrate how the pieces generally come together.

Oh, and one more detail worth noting: I am working with the code for all of this in Google Colab. I'm using it because it's hosted and ideal for demonstrations like this. But you can certainly work in a more traditional IDE, like VS Code.

Installing Libraries

First, we need to install the necessary libraries:

#python
!pip install gradio pillow transformers scipy numpy

We can upgrade the transformers library to the latest version if we need to:

#python
!pip install --upgrade transformers

Not sure if you need to upgrade? Here’s how to check the current version:

#python
import transformers
print(transformers.__version__)

OK, now we are ready to import the libraries:

#python
import gradio as gr
from PIL import Image
from transformers import pipeline
import scipy.io.wavfile as wavfile
import numpy as np

These libraries will help us process images, use models on the Hugging Face hub, handle audio files, and build the UI.

Creating Pipelines

Since we will pull our models directly from Hugging Face’s model hub, we can tap into them using pipelines. This way, we’re working with an API for tasks that involve natural language processing and computer vision without carrying the load in the app itself.

We set up our pipeline like this:

#python
caption_image = pipeline("image-to-text", model="Salesforce/blip-image-captioning-large")

This establishes a pipeline for us to access BLIP for converting images into textual descriptions. Again, you could establish a pipeline for any other model in the Hugging Face hub.
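
If you want to confirm the captioning pipeline works before wiring everything together, a quick sanity check might look like this (my own snippet — "photo.jpg" stands in for any local image):

#python
# Quick sanity check: caption a local image (placeholder filename)
test_image = Image.open("photo.jpg")
result = caption_image(images=test_image)
print(result[0]["generated_text"])  # e.g., "a dog sitting on a wooden bench"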

We’ll need a pipeline connected to our TTS model as well:

#python
Narrator = pipeline("text-to-speech", model="kakao-enterprise/vits-ljs")

Now, we have a pipeline where we can pass our image text to be converted into natural-sounding speech.
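
It's worth peeking at what this pipeline actually returns since we'll need to unpack it in a moment (my own snippet; the exact sampling rate depends on the model):

#python
# Inspect the raw TTS pipeline output
speech = Narrator("A quick test sentence.")
print(speech.keys())                    # dict_keys(['audio', 'sampling_rate'])
print(speech["sampling_rate"])          # 22050 Hz for the VITS-ljs model
print(np.array(speech["audio"]).shape)  # (1, number_of_samples)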

Converting Text to Speech

What we need now is a function that handles the audio conversion. Your code will differ depending on the TTS model in use, but here is how I approached the conversion based on the VITS model:

#python

def generate_audio(text):
  # Generate speech from the input text using the Narrator (VITS model)
  Narrated_Text = Narrator(text)

  # Extract the audio data and sampling rate
  audio_data = np.array(Narrated_Text["audio"][0])
  sampling_rate = Narrated_Text["sampling_rate"]

  # Save the generated speech as a WAV file
  wavfile.write("generated_audio.wav", rate=sampling_rate, data=audio_data)

  # Return the filename of the saved audio file
  return "generated_audio.wav"

That’s great, but we need to make sure there’s a bridge that connects the text that the app generates from an image to the speech conversion. We can write a function that uses BLIP to generate the text and then calls the generate_audio() function we just defined:

#python
def caption_my_image(pil_image):
  # Use BLIP to generate a text description of the input image
  semantics = caption_image(images=pil_image)[0]["generated_text"]

  # Generate audio from the text description
  return generate_audio(semantics)

Building The User Interface

Our app would be pretty useless if there was no way to interact with it. This is where Gradio comes in. We will use it to create a form that accepts an image file as an input and then outputs an audio file containing the generated speech.

#python

main_tab = gr.Interface(
  fn=caption_my_image,
  inputs=[gr.Image(label="Select Image", type="pil")],
  outputs=[gr.Audio(label="Generated Audio")],
  title=" Image Audio Description App",
  description="This application provides audio descriptions for images."
)

# Information tab
info_tab = gr.Markdown("""
  # Image Audio Description App
  ### Purpose
  This application is designed to assist visually impaired users by providing audio descriptions of images. It can also be used in various scenarios such as creating audio captions for educational materials, enhancing accessibility for digital content, and more.

  ### Limits
  - The quality of the description depends on the image clarity and content.
  - The application might not work well with images that have complex scenes or unclear subjects.
  - Audio generation time may vary depending on the input image size and content.
  ### Note
  - Ensure the uploaded image is clear and well-defined for the best results.
  - This app is a prototype and may have limitations in real-world applications.
""")

# Combine both tabs into a single app
demo = gr.TabbedInterface(
  [main_tab, info_tab],
  tab_names=["Main", "Information"]
)

demo.launch()

The interface is quite plain and simple, but that’s OK since our work is purely for demonstration purposes. You can always add to this for your own needs. The important thing is that you now have a working application you can interact with.

At this point, you could run the app and try it in Google Colab. You also have the option to deploy your app, though you'll need hosting for it. Hugging Face also has a feature called Spaces that you can use to deploy your work and run it without Google Colab. There's even a guide you can use to set up your own Space.
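
One small tip: if you're running in Colab and want a temporary public URL to share with others, Gradio can tunnel the app for you (this is a standard Gradio option rather than something specific to this tutorial):

#python
# Launch with a temporary, shareable public link
demo.launch(share=True)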

Here’s the final app that you can try by uploading your own photo:

Coming Up…

We covered a lot of ground in this tutorial! In addition to learning about VLMs and TTS models at a high level, we looked at different examples of them and then covered how to find and compare models.

But the rubber really met the road when we started work on our app. Together, we made a useful tool that generates text from an image file and then sends that text to a TTS model to convert it into speech that is announced out loud and downloadable as a WAV file.

But we’re not done just yet! What if we could glean even more detailed information from images and our app not only describes the images but can also carry on a conversation about them?

Sounds exciting, right? This is exactly what we’ll do in the second part of this tutorial.

]]>
hello@smashingmagazine.com (Joas Pambou)
<![CDATA[Getting To The Bottom Of Minimum WCAG-Conformant Interactive Element Size]]> https://smashingmagazine.com/2024/07/getting-bottom-minimum-wcag-conformant-interactive-element-size/ https://smashingmagazine.com/2024/07/getting-bottom-minimum-wcag-conformant-interactive-element-size/ Fri, 19 Jul 2024 13:00:00 GMT There are many rumors and misconceptions about conforming to WCAG criteria for the minimum sizing of interactive elements. I’d like to use this post to demystify what is needed for baseline compliance and to point out an approach for making successful and inclusive interactive experiences using ample target sizes.

Minimum Conformant Pixel Size

Getting right to it: When it comes to pure Web Content Accessibility Guidelines (WCAG) conformance, the bare minimum pixel size for an interactive, non-inline element is 24×24 pixels. This is outlined in Success Criterion 2.5.8: Target Size (Minimum).

Success Criterion 2.5.8 is level AA, which is the most commonly used level for public, mass-consumed websites. This Success Criterion (or SC for short) is sometimes confused for SC 2.5.5 Target Size (Enhanced), which is level AAA. The two are distinct and provide separate guidance for properly sizing interactive elements, even if they appear similar at first glance.

SC 2.5.8 is relatively new to WCAG, having been released as part of WCAG version 2.2, which was published on October 5th, 2023. WCAG 2.2 is the most current version of the standard, but this newer release date means that knowledge of its existence isn’t as widespread as the older SC, especially outside of web accessibility circles. That said, WCAG 2.2 will remain the standard until WCAG 3.0 is released, something that is likely going to take 10–15 years or more to happen.

SC 2.5.5 calls for larger interactive element sizes of at least 44×44 pixels (compared to the SC 2.5.8 requirement of 24×24 pixels). At the same time, notice that SC 2.5.5 is level AAA (compared to SC 2.5.8, level AA), which is a level reserved for specialized support beyond level AA.

Sites that need to be fully WCAG Level AAA conformant are rare. Chances are that if you are making a website or web app, you’ll only need to support level AA. Level AAA is often reserved for large or highly specialized institutions.

Making Interactive Elements Larger With CSS Padding

The family of padding-related properties in CSS can be used to extend the interactive area of an element to make it conformant. For example, declaring padding: 4px; on an element that measures 16×16 pixels invisibly increases its bounding box to a total of 24×24 pixels. This, in turn, means the interactive element satisfies SC 2.5.8.
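
Here's a minimal sketch of that idea (my own snippet — the class name is arbitrary):

#css
/* A 16×16 icon button whose padding extends the interactive area
   to 24×24 (4 + 16 + 4 = 24 in each dimension) */
.icon-button {
  box-sizing: content-box; /* padding adds to the 16×16 content box */
  width: 16px;
  height: 16px;
  padding: 4px;
}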

This is a good trick for making smaller interactive elements easier to click and tap. If you want more information about this sort of thing, I enthusiastically recommend Ahmad Shadeed’s post, “Designing better target sizes”.

It's also worth noting that CSS margin could hypothetically be used to achieve level AA conformance since the SC includes a spacing exception:

The size of the target for pointer inputs is at least 24×24 CSS pixels, except where:

Spacing: Undersized targets (those less than 24×24 CSS pixels) are positioned so that if a 24 CSS pixel diameter circle is centered on the bounding box of each, the circles do not intersect another target or the circle for another undersized target;

[…]

The difference here is that padding extends the interactive area, while margin does not. Through this lens, you’ll want to honor the spirit of the success criterion because partial conformance is adversarial conformance. At the end of the day, we want to help people successfully click or tap interactive elements, such as buttons.

What About Inline Interactive Elements?

We tend to think of targets in terms of block elements — elements that are displayed on their own line, such as a button at the end of a call-to-action. However, interactive elements can be inline elements as well. Think of links in a paragraph of text.

Inline interactive elements, such as text links in paragraphs, do not need to meet the 24×24 pixel minimum requirement. Just as margin is an exception in SC 2.5.8: Target Size (Minimum), so are inline elements with an interactive target:

The size of the target for pointer inputs is at least 24×24 CSS pixels, except where:

[…]

Inline: The target is in a sentence or its size is otherwise constrained by the line-height of non-target text;

[…]

Apple And Android: The Source Of More Confusion

If the differences between interactive elements that are inline and block are still confusing, that’s probably because the whole situation is even further muddied by third-party human interface guidelines requiring interactive sizes closer to what the level AAA Success Criterion 2.5.5 Target Size (Enhanced) demands.

For example, Apple’s “Human Interface Guidelines” and Google’s “Material Design” are guidelines for how to design interfaces for their respective platforms. Apple’s guidelines recommend that interactive elements are 44×44 points, whereas Google’s guides stipulate target sizes that are at least 48×48 using density-independent pixels.

These may satisfy Apple and Google requirements for designing interfaces, but are they WCAG-conformant? Apple and Google — not to mention any other organization with UI guidelines — can specify whatever interface requirements they want, but are they copacetic with WCAG SC 2.5.5 and SC 2.5.8?

It's important to ask this question because there is a hierarchy when it comes to accessibility compliance, and it contains legal levels.

Human interface guidelines often inform design systems, which, in turn, influence the sites and apps that are built by authors like us. But they’re not the “authority” on accessibility compliance. Notice how everything is (and ought to be) influenced by WCAG at the very top of the chain.

Even if these third-party interface guidelines conform to SC 2.5.5 and SC 2.5.8, it's still tough to tell because they are expressed in "points" and "density-independent pixels," which aren't pixels but often get conflated as such. I'd advise not getting too deep into researching what a pixel truly is. Trust me when I say it's a road you don't want to go down. Whatever the case, the inconsistent use of unit sizes exacerbates the issue.

Can’t We Just Use A Media Query?

I’ve also observed some developers attempting to use the pointer media feature as a clever “trick” to detect when a touchscreen is present, then conditionally adjust an interactive element’s size as a way to get around the WCAG requirement.
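
A sketch of that pattern might look something like this (my own snippet, shown to illustrate the approach rather than endorse it):

#css
/* Bump up target sizes only when a "coarse" pointer (e.g., a finger)
   is detected. It looks clever, but as explained below, input
   detection is unreliable on multimodal devices. */
button {
  min-width: 24px;
  min-height: 24px;
}

@media (pointer: coarse) {
  button {
    min-width: 44px;
    min-height: 44px;
  }
}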

After all, mouse cursors are for fine movements, and touchscreens are for more broad gestures, right? Not always. The thing is, devices are multimodal. They can support many different kinds of input and don’t require a special switch to flip or button to press to do so. A straightforward example of this is switching between a trackpad and a keyboard while you browse the web. A less considered example is a device with a touchscreen that also supports a trackpad, keyboard, mouse, and voice input.

You might think that the combination of trackpad, keyboard, mouse, and voice inputs sounds like some sort of absurd, obscure Frankencomputer, but what I just described is a Microsoft Surface laptop, and guess what? They’re pretty popular.

Responsive Design Vs. Inclusive Design

The two are often used interchangeably, but there is a difference. Let's delineate them as clearly as possible:

  • Responsive Design is about designing for an unknown device.
  • Inclusive Design is about designing for an unknown user.

The other end of this consideration is that people with motor control conditions — like hand tremors or arthritis — can and do use mouse input. This means that fine input actions may be painful and difficult, yet ultimately still possible to perform.

People also use more precise input mechanisms for touchscreens all the time, including both official accessories and aftermarket devices. In other words, some devices designed to accommodate coarse input can also be used for fine detail work.

I'd be remiss if I didn't also point out that people plug mice and keyboards into smartphones. We cannot automatically say that they only support coarse pointers.

Context Is King

Conformant and successful interactive areas — both large and small — require knowing the ultimate goals of your website or web app. When you arm yourself with this context, you are empowered to make informed decisions about the kinds of people who use your service, why they use the service, and how you can accommodate them.

For example, the Glow Baby app uses larger interactive elements because it knows the user is likely holding an adorable, albeit squirmy and fussy, baby while using the application. This allows Glow Baby to emphasize the interactive targets in the interface to accommodate parents who have their hands full.

In the same vein, SC 2.5.8 acknowledges that smaller touch targets — such as those used in map apps — may contextually be exempt:

For example, in digital maps, the position of pins is analogous to the position of places shown on the map. If there are many pins close together, the spacing between pins and neighboring pins will often be below 24 CSS pixels. It is essential to show the pins at the correct map location; therefore, the Essential exception applies.

[…]

When the "Essential" exception is applicable, authors are strongly encouraged to provide equivalent functionality through alternative means to the extent practical.

Note that this exemption language is not carte blanche to make your own work an exception to the rule. Rather, it is an acknowledgment that broadly applied rules may have exceptions that are worth thinking through and documenting for future reference.

Further Considerations

We also want to consider the larger context of the device itself as well as the environment the device will be used in.

Larger touchscreens that sit in a fixed position compel larger interactive areas. Smaller devices that are moved around in space a lot (e.g., smartwatches) may benefit from alternate input mechanisms such as voice commands.

What about people who are driving in a car? People in this context probably ought to be provided straightforward, simple interactions that are facilitated via large interactive areas to prevent them from taking their eyes off the road. The same could also be said for high-stress environments like hospitals and oil rigs.

Similarly, devices and apps designed for children may require interactive areas larger than the WCAG minimums. So would experiences aimed at older demographics, where age-related vision and motor control disabilities tend to be more prevalent.

Minimum conformant interactive area experiences may also make sense in their own contexts. Data-rich, information-dense experiences like the Bloomberg terminal come to mind here.

Design Systems Are Also Worth Noting

While you can control what components you include in a design system, you cannot control where and how they’ll be used by those who adopt and use that design system. Because of this, I suggest defensively baking accessible defaults into your design systems because they can go a long way toward incorporating accessible practices when they’re integrated right out of the box.

One option worth consideration is providing an accessible range of choices. Components, like buttons, can have size variants (e.g., small, medium, and large), and you can provide a minimally conformant interactive target on the smallest variant and then offer larger, equally conformant versions.

So, How Do We Know When We’re Good?

There is no magic number or formula to get you that perfect Goldilocks “not too small, not too large, but just right” interactive area size. It requires knowledge of what the people who want to use your service want, and how they go about getting it.

The best way to learn that? Ask people.

Accessibility research includes more than just asking people who use screen readers what they think. It’s also a lot easier to conduct than you might think! For example, prototypes are a great way to quickly and inexpensively evaluate and de-risk your ideas before committing to writing production code. “Conducting Accessibility Research In An Inaccessible Ecosystem” by Dr. Michele A. Williams is chock full of tips, strategies, and resources you can use to help you get started with accessibility research.

Wrapping Up

The bottom line is that

“Compliant” does not always equate to “usable.” But compliance does help set baseline requirements that benefit everyone.

To sum things up:

  • 24×24 pixels is the bare minimum in terms of WCAG conformance.
  • Inline interactive elements, such as links placed in paragraphs, are exempt.
  • 44×44 pixels is for WCAG level AAA support, and level AAA is reserved for specialized experiences.
  • Human interface guidelines from the likes of Apple, Google, and other companies must ultimately conform to WCAG.
  • Devices are multimodal and can use different kinds of input concurrently.
  • Baking sensible accessible defaults into design systems can go a long way to ensuring widespread compliance.
  • Larger interactive element sizes may be helpful in many situations, but an element might not be recognized as interactive if it is too large.
  • User research can help you learn about your audience.

And, perhaps most importantly, all of this is about people and enabling them to get what they need.

Further Reading

]]>
hello@smashingmagazine.com (Eric Bailey)