
Blog

/blɔɡ,Blóg/ - noun - in our case, a place where we share our thoughts, opinions, ideas, successes and sometimes failures with you.
October 8, 2024

How to Harness the Full Potential of Dark App Icons in iOS 18

App Icon Cover
Photo by Robert Katzki on Unsplash
July 28, 2024

Cultural Significance of Colors: A Guide

Colors are not only an essential part of design; they also often carry a deep cultural significance that can vary from region to region. For a designer in the West, the pitfalls that arise when working with certain colors in international brand messages are often not immediately apparent. In an increasingly globalized world, it is therefore essential to understand the cultural connotations of colors. This guide is intended to serve as an introduction to the topic.

THE PSYCHOLOGY OF COLORS AND THEIR GLOBAL PERCEPTION

YELLOW – SUN AND OPTIMISM

Yellow is almost universally associated with positive attributes. In Western cultures, it represents happiness and optimism, while in Japan it symbolizes courage. However, caution is sometimes advised: in regions like parts of Mexico, a bright yellow can be associated with death, and in some Slavic cultures, yellow gifts may be considered bad luck. It's also important to note that yellow is often associated with danger, as it is frequently used in warning signs.

GREEN – NATURE AND FRESHNESS

Green is the color of nature and is often associated with freshness and new beginnings. In Ireland, green is even a national color, symbolizing luck (think of the shamrock). However, in some Far Eastern cultures, green can also evoke negative associations like infidelity. A very specific example is China: a man wearing a green hat signals that his wife has committed adultery.

RED – PASSION AND LOVE

Red is perhaps the most polarizing color in the spectrum, having the most significant impact on human emotions. While it symbolizes passion, love, and energy in many Western countries and often carries positive associations in Asian cultures, extreme caution is advised in some African countries. Here, red is often associated with death and danger, or, as in Nigeria, with aggression.

BLUE – TRUST AND CALM

If you ask someone about their favorite color, there's a high chance you'll hear "blue." Blue is one of the most popular colors worldwide and is often associated with trust and stability. In India, blue is associated with the deity Krishna and thus with love and divine joy. Compared to all other colors on the spectrum, blue is one of the safest choices. It is positively associated in almost every culture, making it a color you can use without hesitation.

ORANGE – ENERGY AND CREATIVITY

Orange is a vibrant and fruity color that stimulates energy and creativity. It is a popular choice in the Netherlands ("Oranje"), where it is closely associated with the royal family, as well as in Buddhist traditions, where it is connected with the highest enlightenment. In Hinduism, saffron, which has a delicate orange color, is considered sacred. Like blue, orange is a very safe color choice when it comes to avoiding negative associations in other cultures; it has positive connections in almost every culture.

PURPLE – LUXURY AND SPIRITUALITY

Purple has a long association with nobility and luxury, not least because of its rarity in nature. For a long time, only the wealthiest and most powerful people could afford this color, and this connection with luxury persists in many cultures to this day. It is also a color used in religious contexts to symbolize spirituality and piety. In the United States, the Purple Heart is the oldest military award still given out. However, caution is sometimes advised: in some cultures, such as in Thailand and Brazil, purple is associated with mourning. This is also partially the case in the United Kingdom.

WHITE AND BLACK – OPPOSITES IN SYMBOLISM

White is often associated with purity and innocence, particularly in Western cultures, while black often symbolizes mourning for us. In Asian cultures, this is often reversed: white is commonly worn at funerals in China and South Korea. In Africa, black can stand for wisdom and sophistication. Generally, these associations are highly context-dependent.

CONCLUSION

Understanding the cultural significance of colors can help designers communicate brand messages more accurately. It's not just about choosing aesthetically pleasing colors, but also about considering the emotional and psychological impact on an international audience. Using colors incorrectly or thoughtlessly can cause significant damage to a brand. Using them wisely, however, can send precise and intuitive signals to users and customers.

A woman sketches an app's user journey on paper.
January 24, 2024

Want To Have an App Developed? Here's How.

Developing and publishing an app is no easy task. There are numerous pitfalls to consider, starting well before the actual development begins. With proper preparation, you can ensure that the process runs faster, safer, and more smoothly for both you and the agency you hire. Even if (at least we hope!) you have been completely convinced of your app idea for weeks and can hardly wait to get started, it's important to understand that the developer must also fully understand your idea. It is therefore of absolute importance that your idea has already taken on a clear form and structure to prevent costly misunderstandings later. This article explains what you need to keep in mind before the initial meeting with the app development agency.

CONCEPTUAL AND TECHNICAL FEASIBILITY

If you have designed a solution for a problem that you wish to bring to potential users in the form of an app, you should first think about two fundamental things:

1. Can the concept be reconciled with store guidelines, laws, etc.?
2. Is the app technically feasible?

Although you will usually discuss both of these questions, especially the second, in the initial meeting with the developer, it is important to think about them beforehand. Both the App Store and the Google Play Store have a long list of guidelines that apps must fulfill to be distributed there. Unfortunately, some ideas and concepts fall through the cracks even before development begins. A first point of reference should be Apple's App Store Review Guidelines. Experience shows that Apple reviews new apps much more strictly than Google, so it can generally be assumed that an app approved for the App Store will also be approved for the Google Play Store. It is therefore worth studying the App Store Review Guidelines before further planning to identify possible problems with the concept. A professional app agency is well-versed in these guidelines and will let you know during the initial meeting whether the idea is feasible or whether conceptual adjustments need to be made. After all, it is better for potential guideline violations to be uncovered before development begins, avoiding large costs and later disappointment. Even if you don't have a deep technical understanding, you should at least try to visualize the complete user journey of your app to identify potential feasibility issues. Your developer will of course answer whether and to what extent the app can actually be implemented technically - but it pays off if you already have a rough overview of your app's peculiarities from a user perspective before contacting them.

NATIVE, HYBRID, OR PERHAPS WEB APP?

The ways an app can be developed have multiplied in recent years. This is good news for you: whereas apps used to have to be developed natively and separately for each desired operating system, technologies like hybrid apps today make it possible to have an app developed in a fraction of the time and with a much smaller budget. But not every app is suitable for development as a hybrid app. Let's start - without getting too deep into the details - from the beginning:

NATIVE APPS

A native app is an app developed specifically for one operating system, such as iOS. An app developed natively for iOS will not work on Android. If the app is to be offered to Android users as well, it must therefore be developed twice.
Compared to hybrid or web apps, this is initially a gigantic disadvantage (see below) - but native apps certainly have their place. Development as a native app is recommended especially for apps that rely heavily on performance. Only this way can an app be tailored 100% to the respective operating system: the code of your app runs without an intermediate layer and communicates directly with the operating system. Natively developed apps are therefore by far the fastest and most performant (as long as the developer knows what they are doing! ¯\_(ツ)_/¯). Despite all of this, native apps make up by far the smallest portion of the apps that leave our app agency.

WEB APPS

A web app is essentially an app that runs on the web. Such an app doesn't necessarily have to be distributed through the smartphone app stores. If you visit, for instance, twitter.com on your computer, you are using a web app. If an app is to work completely independently of the device used and does not need access to specific smartphone functions such as push notifications and background actions, a web app is indeed a reasonable choice. Users can use it from virtually any device. It is also theoretically possible to "package" a web app as a mobile app and distribute it through the app stores. But caution is advised here: while this was long common practice (before hybrid apps existed) to avoid the relatively high development costs of native apps, packaged web apps are now generally unwelcome in the two major app stores. Aside from very few exceptions, you cannot publish a mobile app that simply displays a web app in an embedded browser. In any case, we generally advise against such an endeavor: since the app ultimately runs in a simple web browser, performance is extremely poor compared to the other options. Even a layperson can clearly distinguish such an app from a properly optimized native or hybrid app. To be clear: none of this applies if you do not plan to distribute the app in the app stores. If you deliberately want the app to run in a web browser and accept certain limitations such as missing hardware functions and lower performance on smartphones, a web app is an excellent and, not least, very inexpensive way to develop an app.

HYBRID APPS

When it comes to smartphone apps, hybrid apps are by far the type we develop most as an agency. And for good reason: hybrid apps combine the performance and hardware access of native apps with the easy and cost-effective development of web apps. Hybrid apps are developed only once and can then be "converted" into a native app for both major smartphone operating systems (iOS and Android). This technology is long past its infancy: hybrid apps developed with modern frameworks like React Native or Flutter are hardly distinguishable from "true" native apps. It's also effortlessly possible to use nearly every operating system and hardware feature of modern smartphones. Although native apps still hold a slight performance lead, the difference is unnoticeable for over 90% of apps. We've written a comprehensive article, including pros and cons, on our agency favorite, React Native. We will advise you on this question in depth in a non-binding initial consultation - but it makes sense to think about it in advance: Where do I want to distribute my app at all? In the app stores, or is a web app enough? Does my app need access to certain hardware functions like GPS?
Does it perform complex calculations requiring a high level of performance? If you already have a rough idea about these questions, it helps the developer enormously in assessing your project.

MAKING MONEY WITH YOUR APP

Two questions you should definitely ask yourself before starting your app project:

1. How will my app become successful - how do I get users to use it?
2. How do I make money with my app?

Of course, it doesn't always have to be about making money; you might simply want to realize a passion project. But developing an app is often an expensive endeavor, and you should consider whether and how you can at least recoup the costs. There are virtually endless possibilities for this. Besides more or less "universal" options like banner advertising, digital products and subscriptions (premium features) often play an important role. And even if you are completely convinced of your app's concept: unfortunately, a good idea alone is not enough in most cases. Users must also find your app and stay engaged with it long-term. This opens up completely new fields, on which your app agency will advise you comprehensively during the initial consultation. It is worthwhile, however, to have the concept largely worked out and to have a plan ready for the time after development: how do you get the app to its audience?

DEVELOPMENT PROCESS

There's a high chance that your app idea is your first project of this kind, so there are countless new things to learn. Perhaps you're wondering what the development process of an app actually looks like. Before the actual development begins, the concept is first discussed and perfected together during an initial meeting. Particular attention is paid to the individual challenges of your project, and possible solutions are proposed. Once the concept is watertight, you receive an offer. Depending on whether you already have an app design or need help with publication, this offer includes various services. Then, if it doesn't yet exist, the app's design is developed. Wireframes - very rough sketches of the app - are usually created first. These are meant to convey the basic layout and user journey, i.e., how users will navigate through the app. From these, a finished UI design emerges. The designer stays in close consultation with you to ensure that the app fully meets your requirements and wishes. Once the UI design is finished, there is usually a kickoff meeting in which any remaining open questions and the further concrete steps are discussed. Once everything is planned and discussed, the actual development can begin. This is usually done in so-called "sprints": a certain number of functions or tasks are completed in a period of, for example, 14 days. At the end of such a sprint, there is a sprint review, during which the participants evaluate the results together and determine the tasks for the next sprint. This way, your app is created piece by piece, and you can follow the progress closely. Once the app's development reaches a certain point, a test version is provided to you. At this stage, the app is not yet distributed through the app store but through designated test programs, allowing you to install the preview version of your app on your device. At the end of successful development comes the publication. Besides the app itself, numerous other pieces of information, such as descriptions, screenshots, and meta-information, are required.
It's worth working with an experienced agency here, one that can tell you from the start of the conception and development process what is required besides the app itself to successfully publish it.

CONCLUSION: HAVING AN APP DEVELOPED

This article has likely overwhelmed you with new information. Don't worry: a professional app developer or app agency will guide you through every step of app development. Only this way is successful development and publication possible. If you want to take some work off your service provider's plate and ensure clearer structures, it is worth preparing for the questions described in this article.

React logo on an abstract background
January 6, 2024

App Development with React Native: Pros and Cons

In recent years, a lot has changed in the field of app development. Gone are the days when apps had to be developed separately for each operating system to be supported. Hybrid apps are the new trend, and for good reason: hybrid development saves enormous amounts of time, costs, and other resources. The vast majority of apps that leave our app agency are developed in React Native. Many clients ask us: what is it that allows you to implement my app in a fraction of the usual time? Here's an overview, including pros and cons.

REACT NATIVE IN A NUTSHELL

Heads up, this gets a bit technical - but we'll keep it short for clarity. React Native is a framework (essentially a toolkit) for developing primarily, but not exclusively, smartphone apps. It is based on the UI framework React and extends it, as the name suggests, with the ability to develop native apps for different operating systems. While "true" native apps are developed directly for a specific operating system, React Native apps are developed "universally." The written code is then packaged into a native iOS or Android app. Through a so-called bridge, which is also included in the native app generated by React Native, the packaged code can communicate directly with the operating system. This makes it possible to use software and hardware functions that pure web apps, for example, have no access to. UI elements are rendered directly through the native app and don't need to communicate over the bridge. This gives React Native apps outstanding performance that doesn't need to shy away from comparison with native apps.

WHO USES REACT NATIVE?

Chances are high that you have at least one app (probably many more) installed on your smartphone that was developed in React Native. React and React Native are primarily developed by the internet giant Meta, but both have an enormously large open-source community. Among others, the apps of Facebook itself, Amazon including Kindle, Microsoft Office, and Pinterest were developed in React Native. You can see an overview of other apps in the Showcase on the React Native website - and even that is only a tiny sample. You probably use more apps developed with React Native than you think, and you won't be able to tell the difference from native apps.

THE TURBO FOR YOUR APP PROJECT

The big advantage of React Native (as with other similar frameworks like Flutter) is that an app only needs to be developed once. What sounds trivial has a huge impact on the time and budget required. The universal React Native code can be exported as a native app for any supported operating system. In most cases, that's iOS and Android. However, it is equally possible to develop web apps, applications for macOS and Windows, and even Apple TV apps with React Native - all with a single codebase. Granted: here and there, device- and operating-system-specific adjustments and optimizations must be made. Not everything that works smoothly on iOS runs directly on Android. But compared to developing a separate app for each operating system, the effort for this is so marginal that it can be almost completely neglected. In the end, it is therefore possible to develop an app for iOS, Android, macOS, Windows, and the web for the price and time of one app.
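To make the "write once" idea a little more concrete, here is a minimal sketch of a React Native component in TypeScript; the component and its content are purely illustrative:

```tsx
// A minimal sketch of the "universal code" idea: this one component is
// written once and rendered through genuinely native UI elements.
import React from "react";
import { Text, View } from "react-native";

export default function Greeting() {
  return (
    <View style={{ padding: 24 }}>
      {/* <Text> is rendered as a native text element on both iOS and Android */}
      <Text>Hello from a single codebase!</Text>
    </View>
  );
}
```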
OPEN-SOURCE AND THE ECOSYSTEM

Another point React Native can boast is the community behind it. While React Native provides a solid basis, there are countless libraries from volunteer open-source developers that enable apps to be implemented even faster and more efficiently, without having to reinvent the wheel at every corner. The app needs navigation with a tab bar at the bottom of the screen? Sure, someone has already solved that. The app needs access to the camera and GPS? There are several handfuls of freely accessible modules just for that. However, there are two dangerous traps here. On the one hand, one could be tempted to sift through the plethora of open-source libraries and clutter the app with functions and UI elements that are completely irrelevant to the user. Not only does this hurt the app's performance, it also adds nothing for the user and makes the app more complex than it needs to be. The much greater danger lies in the choice of modules: it must be remembered that these are ultimately developed on a voluntary basis. Nothing prevents the maintainer of an open-source library from discontinuing development from one day to the next. If your app is to continue being developed, this usually means that the affected module must be swapped out or re-implemented with your own code - in part, doing the same work twice. It is therefore extremely important to have an experienced agency specialized in React Native at hand, one that is familiar with the ecosystem and knows which modules can be used, and to what extent, without risking shooting yourself in the foot in the long run.

EXPO – REACT NATIVE ON SPEED

Particularly noteworthy in the React Native ecosystem is Expo. Virtually no app leaves our agency without using at least one module from Expo or being fully integrated into Expo's own ecosystem. Expo is a veteran in the React Native community and has been actively advancing its development for years. Out of it has grown a whole system of frameworks, modules, and tools that greatly simplifies working with React Native. Here too: the app needs access to the camera? There's an Expo module for that. Need access to the user's location? No problem, Expo has a library for that. For a long time, however, using Expo modules had the disadvantage of "locking" oneself into the Expo system (vendor lock-in). Its modules could only be used if one also used Expo's toolchain, which in turn meant that React Native's own tooling could no longer be used. Moreover, with Expo, many libraries from other providers couldn't be used either, which meant that some software and hardware functions, such as in-app purchases, couldn't be implemented at all if you relied on Expo. In rather inexperienced app developer circles, Expo is still somewhat discredited. This is completely unfounded: since November 2021 at the latest, with the release of the Expo Application Services (EAS), this vendor lock-in no longer exists. All Expo modules and even Expo's tooling can be used without locking yourself in, and using libraries from other providers alongside Expo is no longer a problem either. At the same time, compiling (i.e., packaging the code into a runnable app) and distributing React Native apps has become much easier thanks to EAS. Today, the rule is: anything that can be done in React Native can also be done in Expo - but faster, more comfortably, and more efficiently thanks to the comprehensive toolsets.
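As an example of how little code such a hardware feature can take, here is a small sketch using the expo-location module mentioned above (the wrapper function is our own illustration):

```ts
// A sketch of reading the device position via Expo's location module.
import * as Location from "expo-location";

export async function getPosition() {
  // Ask the user for permission first - required on both iOS and Android.
  const { status } = await Location.requestForegroundPermissionsAsync();
  if (status !== "granted") return null;
  // One call, one codebase - works on iOS and Android alike.
  return Location.getCurrentPositionAsync({});
}
```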
Those interested in a slightly more technical explanation of EAS will find a good account of how it differs from the previous workflow in this post by Expo.

HOW FUTURE-PROOF IS REACT NATIVE?

With all these advantages, you might be wondering whether this is future-proof. After all, React Native is another layer of your app that may fall away someday. Here too, React Native shines with the strong support of the open-source community. Even if Meta should ever decide to stop developing React Native, the chances are extremely high that another developer team would take over this role. Numerous business models have emerged around React Native, not least the aforementioned Expo Application Services, which means a lot of money is involved in the ecosystem. It is therefore almost impossible for React Native to disappear or stop being developed overnight. Even the thought of Meta losing interest in developing React Native is quite far-fetched: after all, Meta probably has little interest in having to rebuild a large part of its own apps, which themselves rely on React Native. With an app developed in React Native, you are opting for a future-proof option that makes your app development not only faster and more efficient but also more cost-effective.

DISADVANTAGES AND LIMITATIONS

Having raved about React Native so far, it's only fair to point out disadvantages and the cases where a native app might be the more sensible option. One thing upfront: if you hire a developer who advises against React Native because feature X or Y supposedly cannot be implemented and only a native app would work, that developer unfortunately has a fundamental misunderstanding of React Native. As described at the beginning of this article, React Native is not just a nice-looking web app. Even if there is no ready-made module for a specific function, it is possible to develop that function natively for the desired operating systems and let the native code communicate with the universal hybrid code through the bridge (a small sketch follows at the end of this article). There is still no need to develop a completely separate app for each operating system; only the function itself needs to be developed separately, while the business logic built on top of it lives in the universal code, which only needs to be written once. For this to be possible, however, you need a developer or agency deeply familiar with React Native and specialized in developing with it. A rather inexperienced developer will only be able to rely on open-source modules and, unable to develop their own native module when there is no ready-made solution, will quickly advise you to simply develop the entire app natively. A costly mistake. Nevertheless, there are also situations where a native app makes more sense. However, these are so rare that we can count them on two hands in our agency's history. Mostly, these situations concern the required performance of the app: if it relies on absolute peak performance, React Native is out of the question. But the likelihood is high that you think your app needs more performance than it actually does. Speed beyond what React Native can offer is typically only needed by apps handling complex 3D visualizations, augmented reality, or video editing, and by 3D video games. We would be happy to advise you on this with regard to your app project in a non-binding initial conversation.
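As a technical footnote, here is the promised sketch of what calling a hand-written native function looks like from the universal side. It uses React Native's NativeModules API; the module and method names are purely illustrative and assume the native part was implemented and registered separately for iOS and Android:

```ts
// A sketch of crossing the bridge into platform-specific code.
import { NativeModules } from "react-native";

// "FancyProcessor" is a hypothetical native module registered on both platforms.
const { FancyProcessor } = NativeModules;

export async function processNatively(input: string): Promise<string> {
  // This call crosses the bridge; the heavy lifting happens in native code,
  // while validation, state, and UI stay in the shared codebase.
  return FancyProcessor.process(input);
}
```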

Decorative photo of an iPhone 14 Pro leaning against a wall with the App Store open
January 4, 2024

How to Actually Publish an App in the App Store?

If you want to distribute a mobile app to potential end users, there is typically no way around the Apple App Store and the Google Play Store. But what exactly is required to list and distribute an app in the two largest app stores? And what does it all cost? We explain this and much more in this article.

THE APP STORE DUOPOLY

If you want to download an app to your smartphone, whether an iPhone or an Android device, chances are high that you'll open the App Store or the Google Play Store. Aside from smaller storefronts of certain smartphone manufacturers like Huawei, these two large stores have effectively held a duopoly on the app market since the beginning. There's practically no getting around them when downloading a mobile app. On Android devices, there is theoretically the option to obtain and install apps directly from the internet (outside of the Google Play Store or other closed ecosystems), but this requires loosening security restrictions on the device. And even though Apple is currently preparing a similar opening of iOS due to new EU regulations, most users still end up in the App Store. In other words: if you want to publish an app for end users and ideally make money with it, you are bound to the two major app stores.

REQUIREMENTS AND GUIDELINES

Both Apple and Google impose extensive requirements on apps to be published. So if you want to publish an app, you are not only tied to the two major players by market conditions, you also have to play by their rules. Both companies scrutinize every single app upon initial release and practically every submitted update. In our experience, Google is far more lenient in these reviews, whereas Apple is much stricter - it can sometimes take two or even three attempts for an app to be accepted into the Apple App Store. Not least for this reason, the Google Play Store has almost twice as many apps as the App Store. However, this does not mean that Google gives developers a free pass: here, too, certain guidelines must be followed to avoid rejection (possibly even permanent removal) of the app. The development and publication process should therefore ideally be carried out by an agency experienced in this field. For context: Apple's App Store Review Guidelines are nearly 15,000 words long and cover every little detail, from design requirements to technical conventions to conceptual constraints. Especially with your first app, it's easy to stumble over one of the many rules. The guidelines must be considered not only during development and publication but already in the conception of the app.

HOW MUCH DOES PUBLISHING COST?

Without a doubt, the design and development process accounts for the highest costs of an app. Nonetheless, you should not overlook the costs of publishing in the two app stores. Here, again, the approaches of Apple and Google differ significantly. While Google charges a one-time registration fee of 25 USD for developers, which covers all publications in the Google Play Store, Apple is more expensive: an annual fee of 99 USD is due to keep the App Store account active. If the fee is not paid, the account - and thus the listing of your app in the App Store - is deactivated. Unfortunately, it doesn't stop at the registration fee, at least if you plan to make money with your app. Here, both Apple and Google reach into developers' pockets once more. While Google holds onto a more or less fixed fee of 30% on all in-app purchases, Apple offers the option to apply to have its fees (which also amount to 30%) reduced to "only" 15% up to a maximum turnover of 1 million USD (more on this below).
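What those percentages mean in practice is quickly calculated; here is a back-of-the-envelope sketch using the standard 30% rate and the reduced 15% rate described above (the price is an arbitrary example):

```ts
// What remains of an in-app purchase after the store's cut.
function netRevenue(grossUsd: number, feeRate: number): number {
  return grossUsd * (1 - feeRate);
}

console.log(netRevenue(9.99, 0.3).toFixed(2));  // "6.99" reaches the developer
console.log(netRevenue(9.99, 0.15).toFixed(2)); // "8.49" under the reduced rate
```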
There are also various special regulations in both stores, for example when a subscription reaches a certain minimum duration. Especially with Apple, certain types and sizes of businesses can have the registration fee and/or the fees on in-app purchases waived or reduced; this applies, for example, to non-profit organizations. Here, too, it is best to seek advice from an experienced agency, since the forms are sometimes (probably deliberately) difficult to find and the process is lengthy. We have successfully accompanied this process with several clients - feel free to contact us.

CAN THE FEES BE BYPASSED?

While the registration fees (apart from the exceptions described above) are practically set in stone, since the app otherwise won't reach interested users, one might think the fees on in-app purchases could easily be bypassed. More than a few publishers have had the idea of simply integrating their own billing systems - credit cards, PayPal, or others - into the app to dodge the 30% fee. Unfortunately, this typically runs afoul of the guidelines of both stores. For most types of apps, implementing your own billing system is not allowed, for the "protection of users" (sic!). Certain business models, like delivery apps, are an exception. Perhaps you have noticed that you cannot purchase a premium subscription in Spotify's app. Due to this very regulation and the high fees, Spotify has deliberately removed this function from its own app: if users wish to purchase a premium subscription, they must do so via Spotify's website. The App Store guidelines even go so far as to prohibit pointing users to the fact that a subscription can be purchased on the website. This practice is currently a hot topic at the EU level, and the rules will likely have to be loosened by Apple and Google sooner or later. Until then, however: unless you're Spotify and can afford to sell your products only outside of the app thanks to your market position, we advise against such a step. The number of paying users lost is likely to outweigh the potential profit gained by bypassing the fees.

WHAT HAPPENS AFTER THE RELEASE?

With the successful release of the app, the work is not over: both Apple and Google regularly update their guidelines and set deadlines for existing apps to comply. Ongoing app maintenance is therefore essential for any serious app project. Apps must also be updated to accommodate new operating system versions, screen sizes, and so on. As an experienced agency specializing purely in app development, we naturally support you in this process from A to Z and walk you through the challenges of your project in a non-binding initial consultation. So if you are looking for a professional app agency in Frankfurt or working remotely, feel free to contact us.

Freshly baked cookies on a baking sheet
January 25, 2023

Where is your cookie banner?

Perhaps you have developed a reflex like we have: every time you open a new website, your mouse pointer automatically moves to the center, ready to dismiss the cookie banner you have a love-hate relationship with. You might have been all the more surprised when you visited our website. Where is our cookie banner? The answer is neither a malfunction of your browser nor a subconscious, forgotten dismissal of the banner: we don't have one. But how is that possible? Here's a look at the development of privacy protection on the internet, why cease-and-desist lawyers have little chance with us, and how your website can deter them too.

COOKIES: WHY, WHAT FOR, HOW COME

With the emergence of the "Web 2.0" phenomenon, it seemed like a new method was being developed weekly to track website visitors' surfing behavior as effectively as possible. This ranges from capturing the IP address, to bizarre methods of "fingerprinting" users through browser attributes such as screen resolution, to storing a small amount of data on the user's computer: the cookie. Over the years, however, the use of such tracking measures escalated and (at least in the European Union) rightly culminated in a tough legal crackdown: the European General Data Protection Regulation (GDPR) and the German Federal Data Protection Act (BDSG). Even though these legal changes had a long lead-up, on May 25, 2018, it suddenly felt as if a deluge had broken over the internet overnight. With the GDPR and the sudden realization that personal data is legally protected, new, privacy-friendly methods of analysis had to be developed. At least, that's what some might have thought. The reality was quite different: legal grey areas like the (now overturned) "EU-US Privacy Shield" and cookie banners arose so the invasive tracking methods could continue.

TRACKING WITH MINIMAL DATA

Even before the GDPR took effect, we refrained from using tools on our website that were known to handle user data carelessly. This included, in particular, Google Analytics, which, due to new legal precedents, is now also losing ground in the EU in practice. But how do we manage to get by without cookies and thus without a cookie banner?

ANALYZING THE STATUS QUO

If no cookies or other invasive tracking mechanisms are used, no banner is needed. Simple enough, really. Strictly speaking, this applies only to non-essential cookies; technically necessary cookies (such as those that store the contents of a shopping cart) are exempt, and they are very few. However, many underestimate how far cookies reach: almost every third-party tool you integrate into your website stores cookies on the user's computer and/or phones home. Often these are US companies, which makes the data transfer especially problematic after the invalidation of the "Privacy Shield". But it doesn't have to be that way: there are privacy-friendly alternatives for practically every problem. The first step must be to analyze the status quo - the cookies currently stored by your own website. For this, you can either use browser functionality (the "Web Inspector") or tools like PrivacyScore. Record all cookies and connections to third parties, and analyze or research where they come from.
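For a quick first look, you can also list cookies directly in the Web Inspector's console; a small sketch (note that this only shows what JavaScript can see - HttpOnly cookies remain invisible here):

```ts
// Paste into the browser console on the page you want to audit:
// prints every cookie name and value accessible to JavaScript.
for (const entry of document.cookie.split("; ").filter(Boolean)) {
  const [name, ...rest] = entry.split("=");
  console.log(`${name} → ${rest.join("=")}`);
}
```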
DATA-EFFICIENT ALTERNATIVES

Once you know where your website is calling home and what data is being stored on the user's computer (and, above all, why), you can replace these third parties with providers dedicated to the protection of personal data. One thing in advance: the big disadvantage (or advantage?) here is that many of these tools are fee-based. Where no money can be made from your users' data, money must be made differently. Many of these tools are open-source and can be hosted on your own servers free of charge (which is even more privacy-friendly!), but this requires technical understanding or at least a good IT department (ideally one with technical understanding as well). Below, we list some of the tools and methods we use to operate as data-efficiently as possible and avoid tedious cookie banners entirely.

GOOGLE ANALYTICS

Let's be honest: the elephant in the room is still Google Analytics. We have grown all too accustomed to this powerful tool. Unfortunately, of all the tools you can integrate into your website, it is the one that handles user data most carelessly. Various court rulings have meanwhile made the use of Google Analytics in the EU practically impossible. A privacy-friendly alternative we use is Simple Analytics, from the Dutch company of the same name. Simple Analytics completely refrains from setting cookies. When the website is opened, a script from Simple Analytics is retrieved (from European servers), which takes over tracking the user during their visit. It explicitly refrains, however, from recording the IP address or other personal data; tracking is done entirely via the HTTP referrer. Without getting too deep into technical details - you can learn more here if you're interested. Another alternative is Matomo, which can also be hosted on your own servers.

GOOGLE MAPS

Google Maps was long the only connection to a third-party server on our website. Unfortunately, there is still no privacy-friendly yet easy-to-implement alternative. On our old website, we therefore used OpenStreetMap. The problem? In principle, it works without setting cookies, but the map data is still retrieved from OpenStreetMap's servers, which sit behind Fastly, a content delivery network operated on American servers. Additionally, OpenStreetMap's own privacy policy is not GDPR-compliant. Using OpenStreetMap out of the box is therefore not legally compliant in terms of data privacy. So we helped ourselves with a little trick that we also use for other third-party services: we placed an HTTP proxy between your browser and the OpenStreetMap servers. When retrieving the map, your browser does not request the map data directly from OpenStreetMap but from our server, which then forwards the request to OpenStreetMap. OpenStreetMap therefore only ever sees the IP address of our server; your IP address remains hidden. OpenStreetMap can also be operated in a privacy-compliant way by hosting a so-called "tile server" yourself. However, the installation and maintenance of such a server is so outrageously complicated that it probably won't be worth it for most users. Here we can hope that the market opens up a bit more in the future and easy-to-use, privacy-compliant "out of the box" options emerge.
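For the technically curious, here is a minimal sketch of the proxy trick described above, assuming Node 18+ (built-in fetch); the /tiles/{z}/{x}/{y}.png route scheme is illustrative:

```ts
// A tiny tile proxy: the browser requests map tiles from this server,
// which forwards them to OpenStreetMap, hiding the visitor's IP address.
import { createServer } from "node:http";

const OSM = "https://tile.openstreetmap.org";

createServer(async (req, res) => {
  const match = req.url?.match(/^\/tiles\/(\d+)\/(\d+)\/(\d+)\.png$/);
  if (!match) {
    res.writeHead(404);
    res.end();
    return;
  }
  const [, z, x, y] = match;
  // The upstream request originates from this server's IP address,
  // so the visitor's IP never reaches the OpenStreetMap servers.
  const upstream = await fetch(`${OSM}/${z}/${x}/${y}.png`, {
    headers: { "User-Agent": "example-tile-proxy" }, // OSM asks for an identifying UA
  });
  res.writeHead(upstream.status, { "Content-Type": "image/png" });
  res.end(Buffer.from(await upstream.arrayBuffer()));
}).listen(8080);
```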
ZOOM & CO.

Zoom, Microsoft Teams, Google Meet... do the hairs on the back of your neck stand up too? Many of these video conferencing tools are operated by US companies and are therefore not necessarily the best options in terms of data privacy. Unlike with map services, however, there are numerous privacy-friendly alternatives on the market. One popular option is Jitsi Meet, as it is fully open-source and can be deployed on your own hardware. With a few small additional configurations, like disabling the Gravatar integration, no connections to third-party servers are made at all. Even out of the box, Jitsi already meets the applicable data protection regulations. Those who prefer a cloud solution might find the European video conferencing software Whereby appealing.

GOOGLE FONTS

Admittedly somewhat off-topic, as no cookies per se are stored, but at least as important due to its still widespread use: Google Fonts. In principle, using the beautiful fonts Google provides is not a data protection problem. We use one too ("Inter") - it's what you're reading right now! The problem that persists on many websites, and that has led to a veritable wave of cease-and-desist letters in recent weeks, is integrating the fonts via a link tag pointing to Google's servers. Instead, the fonts must be integrated locally, meaning they are hosted on the same server as the rest of your website. The integration is not too complicated and is further simplified by useful tools like the "Google Web Fonts Helper".

Image of a robot as a symbol for AI
August 26, 2022

AI in Practice: How It Can Support Your Business

As a modern digital agency, we always strive to keep educating ourselves and to grow with our environment. One technology that has rapidly gained momentum in recent years and has become a household term is Artificial Intelligence (AI) - or, more precisely, Machine Learning. It has recently penetrated numerous industries, making workflows simpler, more efficient, and more cost-effective. We recently had the opportunity to develop an ML project from scratch for a client and test it in practice. Here's an experience report.

WHAT IS MACHINE LEARNING ANYWAY?

Machine Learning is a discipline within the broader field of "artificial intelligence." Traditionally in software development, a developer wrote rules that a computer could use to process inputs into outputs. In the classical approach, the inputs and rules were thus always known in advance, while the outputs were the unknowns in the equation. Machine Learning turns this concept on its head: instead of providing the computer with inputs and rules, we leave the rules as the unknowns and provide the computer only with the inputs along with their corresponding outputs. This might sound a bit strange at first, but it's easy to explain, at least superficially. We feed a Machine Learning model with thousands of input-output combinations (training data), from which the computer "learns" the rules for deducing an output from a given input. It does this by examining the training data over numerous iterations (known as "epochs"), slightly adjusting the weights of the connections between the individual neurons defined in the model. A Machine Learning model thus functions similarly to the human brain, but is much smaller and less complex. An AI model cannot think independently or become empathetic; it can only learn and operate within predefined boundaries, i.e., the provided training data. If everything goes right during training, an optimal model can generalize from the training data, applying the learned rules to new, unseen inputs and arriving at sound results. Achieving this depends on numerous factors, including the quality and quantity of the training data, its preparation, the number of epochs, the hyperparameters (which we'll get to later), and much more.

FOOTBALL IS PURE CHANCE! OR IS IT?

A while ago, a potential client came to us with a clear task: "Develop a football formula!" Okay, granted: maybe not exactly in those words, but the aim of the project was to calculate winning probabilities in football for an international sports platform. The principle might already be familiar to some from sports betting: betting companies (bookmakers) calculate probabilities for various events in every game that can be bet on their platform, such as the winner of the match or the number of goals. Bookmakers use a variety of algorithms for this purpose, but the use of machine learning is still relatively new here. The question the client wanted to explore with this project was: "Can a Machine Learning model beat the bookmaker?" After agile refinement of the project requirements and the collection of thousands of datasets from past matches in various football leagues, including the Bundesliga, we started implementing the solution.

THE FIRST TEST MODEL

To familiarize ourselves with the data and make initial attempts, we set up a first test model.
Before diving into the actual development of the Machine Learning model, it is crucial to prepare the training data so a neural network can actually work with it. For the first model, we used the following data points as input parameters:

* Home team ID
* Away team ID
* Arena ID
* Season (year)
* Matchday
* Weekday
* Evening match
* Home team league points
* Away team league points

In our view, this represents a solid foundation for making initial, very simple predictions about the outcome of a match. Identifiers like those for the teams and the arena were "one-hot encoded". Without this encoding, an ML model might treat them as plain numeric values and draw false conclusions - a team ID of 40 is not "more" than a team ID of 7. It is therefore important to separate them cleanly, which in practice is often achieved through one-hot encoding: a data value is transformed into a one-dimensional array of zeros and ones in which exactly one value is a one and all others are zero. Other values, like the matchday or league points, were retained as numeric values. Additionally, for simplicity's sake, we removed all datasets in which one or more of these data points were missing. This ensured that the ML model learns from the "purest" and simplest data possible.
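As a minimal sketch of what one-hot encoding looks like in code (the team IDs here are illustrative, not actual data from the project):

```ts
// One-hot encodes an ID against the full list of known IDs.
function oneHot(id: number, allIds: number[]): number[] {
  return allIds.map((candidate) => (candidate === id ? 1 : 0));
}

const teamIds = [7, 12, 23, 40]; // e.g. all team IDs seen in the training data
console.log(oneHot(23, teamIds)); // [0, 0, 1, 0]
// Numeric features such as league points stay plain numbers; a full input
// vector concatenates the one-hot blocks with those numeric values.
```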
DESIGNING THE MODEL

The next step was to design the actual ML model. Again, for the first prototype, we opted for the simplest possible model, so we could optimize and fine-tune it afterwards. For the development of the model, we used Google's well-known framework TensorFlow and the abstraction framework Keras built on top of it. Keras makes it very easy to design simple models from predefined network layers. Without going too deep into the technical side: after several attempts, we ended up with a model featuring an incoming flatten layer (which converts the combination of one-hot encoded and numeric values into a simple, one-dimensional array), two hidden dense layers with associated dropouts, and an output layer.

RESULTS AND EVALUATION

Fortunately, just as the first prototype was completed, the new Bundesliga season began - a perfect moment to try out the AI live. After everyone in the agency had placed their personal bets, it was the AI's turn. The first fixture was an admittedly easy one to call: Eintracht Frankfurt vs. FC Bayern Munich. The AI predicted a 66% chance of a win for Munich; had Munich had home advantage instead, it would have predicted a win probability of over 75%. Logical - and ultimately correct: Bayern won the game 1:6. Interestingly, the predicted win probability for Munich continued to rise with each matchday. This signaled that the AI was missing crucial data at the start of the season to make truly convincing predictions. The graphical evaluation based on test data (i.e., datasets that the AI did not see during training and that were used only for model evaluation) was rather sobering, albeit entirely expected: the precision of the AI's predictions converged around 50%. That is definitely much higher than pure random chance, which is 33% with three possible match outcomes (home win, draw, away win). But with 50%, no war can be won against the bookmakers' odds.

THE SECOND MODEL "JARVIS"

So back to the drawing board. Based on our experiences with the first model, we fine-tuned it, adjusted the data points used, and slightly modified the data preparation. As this model is the one used in production by the client, we cannot disclose too many details here. However, we supplemented the data points with, for example, the starting line-ups and numerous details about the individual players, giving the model the opportunity to consider and compare the line-ups. We additionally limited the age of the datasets, so that no datasets from very old seasons, such as 2011, flowed into the training. Finally, we fine-tuned the model's hyperparameters. The optimal hyperparameters - such as the optimizer used, the number of hidden layers in the model, or the loss function - are the subject of numerous discussions in developer communities and in science, and they are individual to each model. A popular and simple way to optimize them is an automatic tuner like KerasTuner, which trains the model with varying hyperparameters that are continuously adjusted by specific algorithms, creating optimal conditions for the model. After expanding the data points and fine-tuning the hyperparameters, the model was fully convincing: our best model achieved a precision (accuracy) of over 71% on the validation data, making it about 42% better than our first model - a complete success. And the 71% achieved something else: the win rate of the favorite picked by selected bookmakers consistently fell just below 71%, meaning our model achieved better values than the algorithms used by the bookmakers. Of course, we then had to give the artificial intelligence a name: it was lovingly christened Jarvis, after Tony Stark's artificial intelligence in the "Iron Man" films.

TAKEAWAYS AND CONCLUSION

This practical project shows what successes can already be achieved with simple AI models in complex markets, and what dimensions can be reached by optimizing the training data and hyperparameters used. The development of Machine Learning models will accompany our agency life more intensively in the coming years - we are prepared for it.

A man checks a website's Web Vitals report on his laptop.
January 24, 2022

Core Web Vitals: A Field Guide for 2022

It is no longer an industry secret that Google uses various factors to rank websites in its search, one of them being the speed of the respective website. But what exactly does it mean to talk about the speed of a website, or its "PageSpeed"? In this field guide, we want to give you an insight into Google's Web Vitals - how they contribute to ranking in search, and to what extent - and provide initial tips for optimizing your website's Web Vitals.

WHAT ARE CORE WEB VITALS?

To optimize the Core Web Vitals, you first need to understand what they actually are. First: "Web Vitals" and "Core Web Vitals" are often used synonymously, even in professional articles. This is not entirely correct - the Core Web Vitals are a subset of the Web Vitals, consisting of only three metrics that, according to Google, should be measured and optimized for every website. In the course of this article, we will deal with these metrics, but also with other data from the broader Web Vitals that we consider at least equally important. But back to the origin: Google had long been suspected of using the loading speed of a website as a ranking factor. In 2021, Google released an update to its ranking algorithm in which such a factor was officially mentioned for the first time: Google would now rank pages by "Web Vitals" - a set of predefined metrics that measure, weigh, and evaluate the speed and, in a narrower sense, the usability of every website. The principle is easy to understand: Google has a certain idea of how a website should be structured. It cannot influence the content and structure of the websites that appear in its search - but it can certainly influence the order in which they are displayed. So if you plan to reach one of the top spots in a Google search for a hot keyword, you will have to optimize your Web Vitals. In this article, we will primarily deal with the following Web Vitals:

1. Largest Contentful Paint (LCP). This core metric evaluates how long your website takes to display the largest element in its initial viewport. What counts as the "largest element" is determined by various factors, but Google provides tools that let you see which element it considers the largest and where there is corresponding need for optimization.

2. Cumulative Layout Shift (CLS). This metric, also part of the Core Web Vitals, is easily explained by an example you have probably encountered: you load a website you may have visited before. You know there is a button in the middle that you need to press. You try to press it while the page is still loading - but you just can't hit it, because the layout keeps shifting. Very annoying for the user, which is why Google includes it in the calculation of the performance score.

3. First Contentful Paint (FCP). Although this metric is not part of the Core Web Vitals (only of the broader Web Vitals), we consider it one of the most common sticking points in a website's performance, which is why we address it as well. It evaluates how long it takes for anything at all (meaning: something useful to the user) to be displayed on the website. This metric naturally depends strongly on the Time To First Byte (TTFB) (also not part of the Core Web Vitals), which we will touch on briefly as well.

4. First Input Delay (FID). This metric is again part of the Core Web Vitals.
It measures the time from the user's first interaction with the website (e.g., a click on a link) until the browser responds to that interaction. FID is therefore the core metric for how responsive a website is to user input.

Within the broader Web Vitals, there are several more metrics, but they have a smaller impact on the score or rarely cause problems, which is why we will focus only on the four (or five) metrics above.

SOME SIDE NOTES

The above metrics and their weight within the performance score are not set in stone. Google regularly updates its algorithms, including the (Core) Web Vitals, and these updates can make new optimizations necessary. Unless you work in SEO professionally, keeping up can be challenging. A digital agency well-versed in Core Web Vitals can optimize your website sustainably. At Kuatsu, we have already invested a lot of time in optimizing customer projects as well as our own website (more on that in a moment!). So if you prefer to invest your time in advancing your company and leave the optimization of Core Web Vitals to a professional provider, feel free to reach out to us. Additionally, you should keep in mind that Google calculates the Web Vitals twice: once for mobile devices and once for desktops. Desktop scores are generally higher than mobile scores, as Google artificially throttles the network speed to slow 4G or even 3G levels when calculating the latter. Mobile scores, however, are actually the more important ones: according to a StatCounter survey, more than 57% of all page views worldwide come from mobile devices - and the share is growing. The optimization of your website should therefore always focus primarily on mobile devices; don't rest on a good desktop score.

WHAT ARE MY WEB VITAL SCORES?

Before you start optimizing, you should of course first analyze the status quo to see where optimization is most needed. There are several ways to do this.

GOOGLE PAGESPEED INSIGHTS

The simplest way to get your website's performance score is Google PageSpeed Insights. Here, you only need to enter the URL of the page to be tested, and Google's servers take over the measurement. Note, however, that a measurement via PageSpeed Insights feeds not only the current measurement in a controlled environment ("lab data") into the score, but also "field data": Google incorporates the experiences of real users who have accessed your website (e.g., via Google Chrome). The Web Vitals therefore cannot be tricked by simply removing all images and scripts during a controlled measurement - Google also uses real session data. You should also note that your server location can play a crucial role in a PageSpeed Insights measurement. Since the measurement is done on Google's servers, a very long connection to your server may have to be established first, which can dramatically pull some scores down. PageSpeed Insights measures from one of four locations, based on your current location:

1. Northwest USA (Oregon)
2. Southeast USA (South Carolina)
3. Europe (Netherlands)
4. Asia (Taiwan)

The coverage is therefore not bad, but if your server is in Australia, for example, the connection may take considerable time. Even from the Netherlands to a data center in Frankfurt, some time elapses. As you can see, you should not rely solely on PageSpeed Insights, as its measurements may not accurately reflect real user experiences.
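If you want to collect such field data yourself, you can also measure the metrics directly in your visitors' browsers. Here is a small sketch, assuming Google's open-source "web-vitals" npm package (v3 API):

```ts
// Field-measures the metrics described above for real users.
import { onCLS, onFID, onLCP } from "web-vitals";

// Each callback receives the metric once it is known; in production you
// would send the value to your own analytics endpoint instead of logging.
onLCP((metric) => console.log("LCP", metric.value)); // milliseconds
onFID((metric) => console.log("FID", metric.value)); // milliseconds
onCLS((metric) => console.log("CLS", metric.value)); // unitless score
```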
Given these limitations, you should not rely solely on PageSpeed Insights, as the measurement may not accurately reflect real user experiences. LIGHTHOUSE Lighthouse is an open-source tool provided directly by Google, which captures numerous important metrics about your website in addition to the Web Vitals, including accessibility and on-page SEO. Lighthouse runs locally on your computer but simulates a controlled environment in which the metrics can be accurately measured. The best part: you often don't even need to download additional software, especially if you use Google Chrome as your browser. Lighthouse is directly integrated into the Chrome Dev Tools, which you can access via a right-click, a click on "Inspect" in the context menu, and then selecting "Lighthouse" from the top toolbar. As an example, we ran a mobile performance measurement of our website using Lighthouse via the Dev Tools. OTHER WAYS There are several other ways to measure the Web Vitals, including via Google Search Console. However, these are aimed more at experienced SEOs. For beginners who want to assess their website's performance, the above-mentioned PageSpeed Insights and Lighthouse are the most suitable. LARGEST CONTENTFUL PAINT (LCP): SIGNIFICANCE AND OPTIMIZATION The Largest Contentful Paint (LCP) measures how long it takes for the largest content element within the initial viewport (i.e., the area the user sees upon page load) to be fully displayed. This could be a large banner image, a large text block, a video, or a button, but also a more subtle element. According to Google, for example, the largest content element on our new website is the logo in the navigation area. But don't worry: no treasure hunt is required to find the respective element. Both PageSpeed Insights and Lighthouse show you the element directly via a click on "LCP". As with every metric in the Web Vitals, Google has very specific expectations about how quickly this element should load. A good LCP is a load time of under 2.5 seconds, while anything over 4 seconds is very bad and requires immediate action. Anything in between is acceptable but still in need of improvement. Considering the 25% weighting of this metric in the performance score, we personally see an urgent need for optimization for all values above 2.5 seconds.
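If you'd like to see for yourself which elements the browser considers LCP candidates, the PerformanceObserver API reports them. A minimal sketch (depending on your TypeScript version, the element property may not be in the DOM typings yet, hence the cast):

// Log every LCP candidate; the last entry reported before the first
// user interaction is the element that counts as the final LCP.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP candidate after', Math.round(entry.startTime), 'ms:', (entry as any).element);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });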
CHECKLIST FOR LCP OPTIMIZATION Basic LCP optimization is fortunately relatively easy to carry out. Sometimes, however, more in-depth optimizations are necessary to achieve a really good LCP score. You can find the most commonly used optimization methods here in our checklist: * Use a Content Delivery Network. If the element in question is an image, video, or similar embedded resource, the most obvious option is to reduce the loading time of that resource itself. A simple way to achieve this is to serve the resource via a Content Delivery Network (CDN). A CDN is specifically optimized to deliver resources like images as quickly as possible to any user worldwide, using various load balancing techniques. Combined, these techniques result in much faster load times than if the resource were served from your own server. It also takes load off your own server, which is needed elsewhere. A popular CDN solution that we use on our website is Cloudinary. * Compress resources and serve them in a modern format. You will often come across this recommendation in PageSpeed Insights or Lighthouse. In principle, resources, including the LCP element, should be compressed as much as possible to generate as little data traffic as possible. There are numerous tools for compressing images losslessly. Compression should also be enabled on the web server itself, for example via Gzip. Images should preferably be served in a modern format like WebP or JPEG 2000, as these offer much smaller file sizes. However, since not every browser supports these formats, you should always provide the older formats, compressed as much as possible, as a fallback. Also try to dimension raster graphics properly and not send the user a large 1000x1000px JPEG for a small logo in the navigation area. * Minimize client-side JavaScript and CSS rendering. It makes little sense to load a large Google Maps library into the browser at page load when it is only needed at the end of the page. Reduce the JavaScript and CSS you use as much as possible and defer everything that is not needed early in the loading cycle using the async and defer attributes. While JavaScript or CSS is being loaded into the browser, the Document Object Model (DOM), i.e. the actual HTML structure of your website, cannot be extended further – and with it, the LCP element cannot be rendered. CUMULATIVE LAYOUT SHIFT (CLS): SIGNIFICANCE AND OPTIMIZATION The Cumulative Layout Shift (CLS) measures, simply put, the visual stability of your website during loading. To revisit the example from above: imagine a button you want to press during the loading process, but it keeps changing position. Frustrating, right? This is exactly what an optimization of the CLS aims to prevent. The CLS metric primarily reflects the usability of the website. No user wants to accidentally click on a button that leads to a subscription when they only wanted to purchase a single issue of a magazine. But how can you package this layout shift into a comparable value? For Google, this value is the product of what's called the Impact Fraction and the Distance Fraction. The Impact Fraction expresses how much of the viewport an unstable element affects between two frames. Pardon? Let's take an example: imagine an element that occupies 50% of the viewport height upon page load. Due to an unstable layout, it suddenly shifts 25% of the viewport downwards. The areas it covers before and after the shift together take up 75% of the viewport, so the element has an Impact Fraction of 75% (or 0.75). The Distance Fraction expresses how far the element has moved relative to the viewport: in our example 25%, as the element shifted down by a quarter of the viewport height. If the CLS consists only of this element, we thus have a total Impact Fraction of 0.75 and a Distance Fraction of 0.25, which multiply to give a CLS of 0.1875. Google would consider this score in need of improvement: only a score of up to a maximum of 0.1 is considered good, while anything above 0.25 is considered bad.
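As a quick sanity check of that arithmetic, here is the same example in a few lines of TypeScript:

// Layout shift score of a single unstable element (Google's formula):
// score = impact fraction * distance fraction
const impactFraction = 0.5 + 0.25; // viewport area affected before + after the shift
const distanceFraction = 0.25;     // the element moved 25% of the viewport height
const layoutShiftScore = impactFraction * distanceFraction;

console.log(layoutShiftScore); // 0.1875 -> needs improvement (good is <= 0.1)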
CHECKLIST FOR CLS OPTIMIZATION Now that we have clarified the technical details, the question remains: how can we best prevent these layout shifts? * Use placeholders. If you load a button element via JavaScript during the loading process and then insert it above a text block, the text block will be subject to a layout shift. You should therefore use a placeholder that is ideally the same size as the button to be inserted, and then replace this placeholder with the button. This way, the text block knows where it "belongs" from the start and is no longer shifted. * Define widths and heights for images. The browser automatically reserves space for images and videos during loading if it knows how much space it needs to keep free. It can only do this if the respective elements carry width and height specifications. * Replace web fonts with system fonts during loading. When working with web fonts, always make sure a similar system font is specified as a fallback. Not only does this support older browsers that may not display web fonts, but the text is also displayed before the respective font has loaded, avoiding layout shifts. * Avoid layout shifts in animations. When working with animations or CSS transitions, ensure that the entire layout does not shift when the size of an element is animated. Here, too, you should create a wrapper element with the maximum size of the animated element so that surrounding elements are not shifted. FIRST CONTENTFUL PAINT (FCP): SIGNIFICANCE AND OPTIMIZATION The First Contentful Paint (FCP) is closely related to the LCP and measures when the first content element on the website is displayed. Until the FCP, the website is blank and unusable for the user. The FCP should naturally occur far earlier than the LCP: Google indicates a value of under 1.8 seconds as good, while anything above 3 seconds is bad. Optimizing the FCP involves many factors, some of which also have direct (positive) effects on the other metrics. TIME TO FIRST BYTE (TTFB) AS A SUBFACTOR OF THE FCP The FCP naturally depends heavily on how long the server takes to send the first byte to the user. This period is also called Time To First Byte, or TTFB. It includes things like the DNS request, which resolves the hostname (e.g., kuatsu.dev) to the server's IP address, as well as the SSL handshake. All of these have one thing in common: they are server-related and can only be improved by optimizing the server. A large, cluttered database can be one reason for a long TTFB, as can a poor web server configuration. And the best configuration is useless if the hosting provider is simply not good enough, or if the website is served from a single server on the other side of the world from the user's perspective. In the checklist for FCP optimization below, you will therefore also find some points that are directly related to the TTFB.
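To get a quick feel for your own TTFB, the Navigation Timing API exposes it in every modern browser. A minimal sketch you can paste into the browser console:

// responseStart marks the arrival of the first response byte,
// measured relative to the start of the navigation.
const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
console.log('TTFB:', Math.round(nav.responseStart), 'ms');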
CHECKLIST FOR FCP OPTIMIZATION We have established that the FCP is one of the most important influences on user satisfaction. But how do we optimize this metric? There are countless ways to do so, many of which Google presents in their own blog entry. We will cover a few of them in our checklist: * Minify CSS and JavaScript. We have already learned that the Document Object Model cannot be built up while JavaScript or CSS is loading. It is therefore self-evident that both must be minimized as much as possible. There are so-called "minifiers" or "uglifiers" that take your CSS and JavaScript and shrink them as small as possible using sophisticated methods. Use this option. * Remove unused CSS and JavaScript. Closely related to the previous point: remove CSS and JavaScript that is not needed. Here, the disadvantage of a large WordPress theme or similar often comes to light, carrying several megabytes of CSS and JavaScript that are probably not needed for your particular website. For WordPress, there are plugins like Asset CleanUp that allow you to remove unnecessary assets as far as possible. However, with many themes this is not always perfectly possible, which is why the best solution is still to forgo pre-made themes and instead develop your own theme or use a performance-optimized page builder like Oxygen. * Use caching. Most web servers offer many ways to enable caching. This ensures that certain resources are not regenerated with every page load but are cached for a time. Even for WordPress, there are several plugins that, in combination with the corresponding web server adjustments, can result in massive improvements in the FCP value. Every WordPress site we create includes a WP Rocket license and preconfiguration. * Use Static Site Generation (SSG). Admittedly, we are now reaching a point where we are no longer talking about optimization but about a completely new website. A static site generator like Next.js creates static HTML pages and handles much of the JavaScript on the server. No dozens of API requests need to be made in the user's browser to display a blog; instead, the server makes these requests ahead of time and serves a finished HTML page. By the way: we also use Next.js and Static Site Generation. Please note that some of the previously mentioned optimization options, especially in the area of the Largest Contentful Paint (LCP), overlap with those of the FCP and are therefore not listed again here. FIRST INPUT DELAY (FID): SIGNIFICANCE AND OPTIMIZATION The First Input Delay (FID) measures the delay between a user's first interaction with your website and the browser's response to it. If a user clicks a button during the loading process, it takes this amount of time before a response occurs. Here, too, Google provides reference values: a time of less than 100ms is considered good, while anything over 300ms is considered bad. CHECKLIST FOR FID OPTIMIZATION The First Input Delay can prove particularly tricky and technically challenging to optimize. Often, a detailed assessment of the current website and in-depth, technical optimizations are needed to fix a poor FID. However, here are some measures that can be taken (see the sketch after this list): * Relieve the main thread. Rendering in JavaScript runs on the main thread. If API requests and complex calculations are performed there at the same time, rendering – and with it the response to a user interaction – has to wait. This can be remedied by using web workers or asynchronous JavaScript. * Minimize the impact of third-party code. If you use many JavaScript libraries such as Google Analytics, Google Maps, Facebook Pixel, etc., this is reflected in your website's interactivity. Such libraries should be loaded only after the Document Object Model (DOM) is fully loaded. You can use the async or defer attributes in the script tag for this.
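If you cannot simply add those attributes to the markup, you can achieve the same effect from code. A minimal sketch that injects a (hypothetical) analytics script only once the DOM has finished loading:

// Inject a third-party script only after the DOM is ready, so it can
// no longer block parsing or delay the response to early user input.
function loadThirdPartyScript(src: string): void {
  const script = document.createElement('script');
  script.src = src;
  script.async = true; // dynamically injected scripts load without blocking parsing
  document.head.appendChild(script);
}

window.addEventListener('DOMContentLoaded', () => {
  loadThirdPartyScript('https://example.com/analytics.js'); // hypothetical URL
});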
CONCLUSION Since July 2021, the Core Web Vitals have been an official part of Google's ranking and have become a homework task for every serious SEO. But you should not pay attention to these metrics only for a good position in Google search: ultimately, they are good metrics to measure and compare the real usability and user-friendliness of your website. A digital agency specializing in the optimization of Web Vitals can therefore not only improve the Google ranking but also provide real added value for users.

Photo of an open app on an iPhone in front of a colorful background
January 13, 2022

4 Benefits of Hybrid Apps

When developing or planning a new app, most people quickly encounter one question: hybrid or native? Most apps are intended to run on multiple systems simultaneously, usually iOS and Android, and possibly also as a web app. It is therefore essential to understand the advantages, disadvantages, and use cases of the two approaches. Both serve their own purpose and are suitable for different projects. Why we nevertheless recommend developing a hybrid app (often also called a cross-platform app, and not to be confused with a WebView app) for most use cases during client consultations is something we want to demonstrate here by highlighting four benefits of hybrid apps. A SINGLE CODEBASE The greatest advantage of hybrid apps has always been that only a single codebase needs to be developed and maintained. While in native app development a separate app has to be programmed for each supported system, hybrid development follows a universal approach: only a single app is written, which is then compiled into a native app for each of the common systems. The advantage is clear: significant savings in both budget and development time. Instead of multiple developers working on the different systems, a single development team can often handle the programming for all systems at rocket speed. This advantage applies not only to the initial development but also to the future: when new features are added or the app needs other maintenance, adjustments can be made within a few working hours and are ready for use on all systems simultaneously. That not only saves costs but is also very pleasing for the users. PERFORMANCE LIKE THE BIG BROTHER Not too long ago, hybrid apps had a bad reputation. In those days, the still-popular Ionic Framework emerged as a pioneer in its field. Unfortunately, apps programmed this way had a massive issue back then: performance. Even a layperson could easily tell that an app was not optimized for the respective system. Graphics errors, stuttering, and crashes were common. Modern frameworks like React Native, however, are far ahead of that old reputation. Performance issues are now almost nonexistent; on the contrary, hybrid apps achieve speeds close to those of purely native apps, and in most cases users can no longer tell the difference. This sets them considerably apart from so-called WebView apps, which still struggle with exactly these issues. WebView apps differ from hybrid apps, which are compiled into optimized native apps, in that they simply display a website in a web browser wrapped in an app. Such an approach can never achieve the speed and user experience of native and hybrid apps. Furthermore, it has the problem that many system functionalities, such as the user's location or push notifications, cannot be used. With a hybrid app, in contrast, this is not a problem. Today, it's safe to say that anything that can be developed as a native app can also be implemented as a hybrid app. GREAT USER EXPERIENCE This point goes hand in hand with the previous one. The nearly native performance of hybrid apps and the ability to integrate system APIs like Apple Health, HomeKit, and similar create a fantastic user experience. And even if a system functionality is not natively supported by the framework used, an experienced developer can retrofit it with relatively little effort.
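To make the single-codebase idea from above concrete, here is a minimal React Native component in TypeScript – an illustration of our own, one file that runs unchanged on both iOS and Android:

import React from 'react';
import { StyleSheet, Text, View } from 'react-native';

// Written once, rendered as real native views on iOS and Android alike.
export default function Greeting(): JSX.Element {
  return (
    <View style={styles.container}>
      <Text>Hello from a single codebase!</Text>
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1, alignItems: 'center', justifyContent: 'center' },
});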
OFFLINE USABILITY Here, we compare less to a native app and more to the "little brother" – the WebView app. The core advantage over such apps is that a hybrid app can be used independently of an existing internet connection. The app, along with all necessary functionality, is installed on the user's device and therefore does not need to reload each time it's opened. Admittedly, with clever use of caching, many web apps can now function offline (known as PWAs, or Progressive Web Apps). Often, however, certain features still do not work without an internet connection, as such apps cannot access all system APIs and often require a persistent server connection. This problem does not exist with hybrid apps. Moreover, developing a hybrid app is often cheaper than developing an offline-capable Progressive Web App. A SMALL DISCLAIMER… While we are absolutely convinced of hybrid app development, it is not suitable for all types of projects. In certain cases, it makes sense to develop native apps for each supported system. Especially when exceptional performance is crucial for the app's function, a hybrid app – despite its continuously improving performance – may not keep up.

Photo of a MacBook with an open code editor
January 1, 2022

A Guide to TypeScript Migration

For some time now, TypeScript has been our primary language, both for internal projects and client projects. From the beginning, we have worked mainly with JavaScript (in combination with frameworks like React and NodeJS), as the flexibility and wide application range of the language allow us to keep an entire project, including the web and mobile app as well as the backend, in a single language. Not long ago, we migrated all our current projects to TypeScript. Even though every JavaScript project is technically a valid TypeScript project, as TypeScript is a superset of JavaScript, not every JavaScript project is automatically a good TypeScript project. If you add TypeScript to an existing JavaScript codebase, all variables, constants, and objects whose types the compiler cannot derive from the immediate context are considered to be of type 'any'. Clearly, this is not the intended use: TypeScript was developed with the intent of bringing strict types to JavaScript and largely avoiding dynamic typing. As a developer, you are therefore forced to manually type many of these objects. It's only logical to think twice about whether this is truly necessary and whether the benefits outweigh the cost and time investment. There are now numerous tools and programs, like "ts-migrate" by Airbnb (which also uses TypeScript as their official frontend language!), which aim to simplify a TypeScript migration. However, these are by no means perfect and do not eliminate the need to type objects manually. Computers are ultimately not very good at understanding the semantics of your code (although this is rapidly changing with the advent of deep learning and AI technologies like GitHub Copilot). So, we are faced with a dilemma: migrate – or continue maintaining the old codebases? WHY TYPESCRIPT IS SIMPLY BETTER Before tackling the question of whether migrating existing JavaScript projects is worthwhile, let's first examine the two biggest advantages of TypeScript. * The TypeScript Compiler (TSC). In our opinion, the biggest advantage of using TypeScript lies in the compiler. JavaScript is an interpreted scripting language, interpreted and executed by the browser at runtime; compilation is unnecessary. TypeScript, on the other hand, is transcompiled to JavaScript, which allows the browser to interpret the compiled code as regular JavaScript. The main advantage here is that TypeScript code can also be transcompiled to older JavaScript/ECMAScript versions. So, you can use modern features like Promises even when building your applications for older browsers or environments; TypeScript handles compiling these features into equivalent structures of the targeted JavaScript version. * Error detection at build time. Naturally, interpreted rather than compiled languages also offer advantages like faster development cycles, but they have a significant downside as well: many errors that a compiler would usually catch are only discovered at runtime. TypeScript can help here in the largest category, type errors. Every JavaScript developer has at some point accidentally tried to access attributes of undefined or null. TypeScript can prevent such errors before the application reaches the staging or even production environment (a small illustration follows below).
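Here is a minimal sketch of our own (not from a real project) showing the classic null-access bug that TypeScript catches at build time:

interface User {
  name: string;
  nickname?: string; // optional: may be undefined
}

function greet(user: User): string {
  // return 'Hi ' + user.nickname.toUpperCase();
  // ^ refuses to compile in strict mode with an error similar to:
  //   "'user.nickname' is possibly 'undefined'."
  return 'Hi ' + (user.nickname ?? user.name).toUpperCase();
}

console.log(greet({ name: 'Ada' })); // "Hi ADA"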
WHY AND HOW WE MIGRATED ALL (CURRENT) PROJECTS There are plenty of reasons for a TypeScript migration. But is it really worth migrating even existing, sometimes huge codebases? The answer we gave ourselves to this question was a clear yes. After having used TypeScript for some new projects and enjoying static typing in JavaScript, the decision was easy. The implementation is another story: many of our ongoing client projects in JavaScript were several tens of thousands of lines long – so where to start? DETERMINE THE STATUS QUO. If you haven't previously worked with TypeScript "alternatives" like JSDoc, the likelihood is high that you have little overview of which function actually uses which types. It is therefore wise to first get an overview of the current state. Which interfaces does the application talk to? What data is read and written? Is the source of this data type-safe? Comprehensive documentation of the code pays off at this point. ENABLE "STRICT" MODE. In our opinion, TypeScript only really makes sense with "strict" mode enabled. Without it, TypeScript allows much more leeway, for example by permitting implicit 'any' types. However, this is precisely what the migration is intended to prevent: we want to prohibit dynamic, non-static types, and by not using strict mode, we are digging ourselves into a hole. When we first migrated projects to TypeScript, we initially did so in non-strict mode and only activated strict mode afterwards to weed out the remaining errors. We quickly found out that this was not the most effective method: bug fixes and optimizations made in the first step often had to be discarded entirely after activating strict mode. So even if the error tab in your IDE might intimidate you: use strict mode from the get-go. You will save yourself a lot of headaches later on. INSTALL DECLARATION FILES. A large portion of the errors shown by TypeScript might not actually come from your code, but from the node_modules folder. This is simply because most modules do not ship type declarations by default. Fortunately, the open-source community offers suitable type declarations for almost every module. For a module "some_module", you can usually install these with $ npm install @types/some_module. However, if your application has a very specific use case and therefore uses very specific libraries, it may happen that no matching type declarations are available. Instead of simply declaring these libraries as 'any', invest the time to create type declarations yourself. Tech debt is a real issue – and you don't want to get entangled in it at this stage. MIGRATE CLASSES AND FUNCTIONS. Before moving on to the finer details, it's a good idea to first type the central classes, functions, and larger objects of your application. These are likely used repeatedly throughout the code, so they are the first candidates for typing: afterwards, typing the remaining variables and objects becomes much easier. During our migration, we first tackled aspects like middleware, models, etc., assigning them the most precise types possible, which made it much simpler to type the smaller functions and sections of the code later on. Keep your typing as strict as necessary, but not stricter: TypeScript is a very powerful tool, and you can make your types as stringent as you like. However, don't over-engineer! Beyond a certain point, overdoing it with types makes little sense from a cost-time perspective. A small sketch of this step follows below.
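As an illustration (the names are our own and not from an actual client project), typing a previously untyped model and the function that consumes it might look like this:

// One central interface that every smaller function can build on later.
interface Order {
  id: string;
  items: { sku: string; quantity: number }[];
  shippedAt?: Date; // optional: not every order has shipped yet
}

function totalQuantity(order: Order): number {
  return order.items.reduce((sum, item) => sum + item.quantity, 0);
}

const order: Order = {
  id: 'A-1001',
  items: [{ sku: 'mug', quantity: 2 }, { sku: 'shirt', quantity: 1 }],
};

console.log(totalQuantity(order)); // 3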
OTHER STUFF… A significant portion of the work is now done in a relatively short time. But now it comes down to the details. Experience shows that it is often sufficient to give the remaining variables and co. a predefined type like "string". The well-known don't-over-engineer principle applies here, too. You should also adapt your tests to TypeScript. Many popular test frameworks are optimized for TypeScript and work seamlessly with it. CONCLUSION TypeScript is possibly the best thing that has happened to JavaScript in recent years. According to recent StackOverflow surveys, the trend is increasingly towards statically typed languages. The flexibility of JavaScript combined with safe typing makes every project future-proof and easy to maintain. Since our full TypeScript migration, it has become our de-facto standard, and we wouldn't want to miss it.