Data as a material for fashion: How treating data as a material enables a new future for design

Data as a material for fashion

From the measurements used to make garments to the metrics of how fashion is sold and worn, fashion is rich in data. In fashion, this data is often treated using an artistic approach: size, shape, colour, line and silhouette (how a garment appears on a specific body) are created using an implicit sense developed by all those who make, sell and wear fashion. A new perspective on fashion, however, is making these implicit aspects increasingly explicit. This shift is enabled by emerging technologies such as visual recognition, virtual simulation, digital fabrication and electronic textiles. We need a new way of looking at data to interface with the everyday practices of fashion. In the Fashion Research and Technology group at the Amsterdam University of Applied Sciences (AUAS), we treat data as if it were a material, just like cotton, wool or viscose. Fashion design has for years been based on the "feeling" of specialized individuals who have attuned themselves to artistic whims; adding data to the process allows certain parts of the implicit process of fashion to be made explicit. We clearly understand that data is not actually a material, but taking inspiration from Paul Dourish's book The Stuff of Bits, we understand that data has a "materiality". Treating it this way lets fashion designers work with the material in new ways and take a unique approach to design. The Netherlands has been a worldwide leader in research on how technology, data and algorithms can be used in fashion; translating this knowledge into the everyday activities of fashion is a new frontier. Not only do we believe that data science can provide new insights into fashion, we also believe that the particularities of fashion can tell us something about data science.

Fashion is a complex adaptive system

Fashion is a complex adaptive system (CAS) that nearly every human on the planet engages in daily, intentionally or not. Like the swarming of swallows or the movements of trading markets, fashion takes on ephemeral forms: they disappear as soon as they develop. Inside this complexity, trends emerge with greater or lesser intensity. Famous (and infamous) design brands affect the formation (and failure) of the trends that form the system of fashion. Societal acceptance of these trends depends on many factors, as individuals seek both to integrate with and to differentiate themselves from others. People who make fashion are often thought of as spending their time at parties, bars and shopping centers; yet it is in these places that they study the CAS that is fashion, so that they can create disturbances and perturbations within the system. Using data as a material provides an opportunity to understand this and many of the other mechanisms of the fashion system. An example of how we work with fashion as data is our project Style AI's, which currently runs in collaboration with the Applied AI Minor at the AUAS. In Style AI's, a user uploads a photo of an outfit and tags the image with a description of the style. They are then presented with three AIs' evaluations of that style. The project has already shown that users are more skeptical of an AI's judgement when they are presented with several opinions produced from different training sets. Moreover, the computer scientists receive data about how fashion style terms hold different meanings in different geographic locations: "chic" in Amsterdam does not mean the same thing as "chic" in Paris. This data is being shared with designers at the Amsterdam Fashion Institute (AMFI) to help them understand the trend and style process in different cities, adding data to their design process. We look forward to testing the app over the next few years to see how fashion changes over time as well.
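The idea of showing a user several machine opinions at once can be illustrated with a small sketch. This is not the Style AI's implementation; the evaluator names and the style associations below are invented placeholders, standing in for models trained on data from different cities.

```python
# Illustrative sketch only: three style "evaluators" trained on data from
# different cities can disagree about the same user-supplied tag, which is
# what prompts users to be more skeptical of any single AI's judgement.
# All associations below are hypothetical, not learned from real data.

EVALUATORS = {
    "amsterdam_model": {"chic": ["minimal", "functional", "muted colours"]},
    "paris_model":     {"chic": ["tailored", "classic", "neutral palette"]},
    "milan_model":     {"chic": ["bold", "luxurious", "statement pieces"]},
}

def evaluate(tag: str) -> dict:
    """Return each evaluator's reading of a user-supplied style tag."""
    return {name: model.get(tag, ["no opinion"])
            for name, model in EVALUATORS.items()}

opinions = evaluate("chic")
for name, traits in opinions.items():
    print(f"{name}: 'chic' -> {', '.join(traits)}")
```

Presenting all three readings side by side, rather than a single aggregated answer, is what surfaces the geographic differences in meaning described above.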

Wearing, analyzing, designing, making

Recent research has shown how data can be used as a material to make shoes, and how those shoes can in turn generate data to make even more shoes. The process of fashion is a deep cycle involving large numbers of stakeholders. Moreover, fashion is complicated because the form of the attire is modified by the form of the person wearing it. The form-language intended by the fashion designer meets the form-language of the individual wearer; the resulting fusion is a symbiotic process that can range from sublime to outright scary. Yet data as a material through the lifetime of the things we wear opens new doors to understanding and describing fashion. To understand this better, we are making 3D printed shoes that are generated purely from code. 3D printing offers fine control over internal (infill) structures, which not only allows personalization of the shoe to the foot but also the creation of mechanical sensors that "break" after use. This causes a relief pattern to appear on the side of the sole that can be scanned using lidar-enabled cell phones. Mixing different kinds of 3D printed (or, we hope, knitted) mechanical structures allows us to see the number of steps taken, how much force the foot applies and what kinds of movements the foot is making. We are building the shoes to create data for machine learning algorithms that build an understanding of the complex movements of the foot over time. We are confident that this will inform shoemakers everywhere as to how personalized shoes should be made for an individual. In the spring/summer 2021 semester we ran a blockchain project that created the infrastructure to collect this data.
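The principle behind the "breaking" mechanical sensors can be sketched in a few lines. This is a toy model with made-up numbers, not our printed geometry: each cell in the sole collapses after a certain cumulative load, so the pattern of broken cells recovered from a lidar scan encodes roughly how much the shoe has been used.

```python
# Toy model of wear-encoding sensor cells. Thresholds and loads are
# hypothetical units chosen for illustration, not measured values.
from dataclasses import dataclass

@dataclass
class SensorCell:
    break_threshold: float   # cumulative load at which the cell collapses
    accumulated: float = 0.0

    @property
    def broken(self) -> bool:
        return self.accumulated >= self.break_threshold

# A row of cells with increasing thresholds: the more wear, the more cells break.
sole = [SensorCell(break_threshold=t) for t in (100, 500, 1000, 5000)]

def record_step(cells, load=1.0):
    """Apply one step's load to every cell in the sole."""
    for cell in cells:
        cell.accumulated += load

for _ in range(750):                       # simulate 750 steps at unit load
    record_step(sole)

relief = [cell.broken for cell in sole]    # what a lidar scan would recover
print(relief)                              # first two cells broken: usage lies between 500 and 1000 steps
```

Reading back which cells have collapsed brackets the usage without any electronics in the shoe; combining cells tuned to different loads and directions is what would let force and movement type be distinguished as well.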

Fashion is like physics

Studying fashion in many ways is like physics (although physicists would probably disagree). Fashion has a theoretical side that expresses itself primarily in illustrations and art, and an applied side that expresses itself in shoes, shirts and handbags. Each side loves to dislike the other. There is an art and a science to fashion that those who study it must learn to balance. At the same time, fashion changes depending on when you look at it: it is impossible to predict whether a fashion trend is alive or dead until you open the box (very often Instagram) and look. What matters now is that by encouraging those of us involved in fashion to use data as a material, we can open up new and fascinating insights for both fashion and data science. In the Fashion Research and Technology group we are confident that the application of machine learning to fashion will help all of us as we engage in fashion via our clothing every day. We understand that the deeper motivating factors in fashion only become apparent when complex adaptive systems theory is combined with machine learning techniques. Not only do we see new insights into the generative process of form, but also into the analysis, design and manufacture of the multitude of objects in our closets and on our bodies. Moreover, we see exciting new opportunities for extending our understanding of data as a material to the wearer (user), so that they can better understand the impact they have on themselves, their community, city and world.

Troy Nachtigall, Professor of Fashion Research & Technology, presented his inaugural lecture on Tuesday, September 14, 2021 at 4 p.m. (CET). In this lecture he explained how his research group contributes to research, education, and practice.

Making the sum greater than the parts – FAIR Data

The difference in datasets

Every dataset can be different, not only in content but in how the data is collected, structured and displayed. For example, how national image archives store and annotate their data is not necessarily how meteorologists store their weather data, nor how forensic experts store information on potential suspects. The problem occurs when researchers from one field need to use a dataset from a different field: this disparity is not conducive to the re-use of (multiple) datasets in new contexts. The FAIR data principles provide guidelines to improve the Findability, Accessibility, Interoperability, and Reuse of digital assets. The emphasis is placed on the ability of computational systems to find, access, interoperate with, and reuse data with no or minimal human intervention. Launched at a Lorentz workshop in Leiden in 2014, the principles were quickly endorsed and adopted by a broad range of stakeholders (e.g. the European Commission, G7, G20) and have been cited widely since their publication in 2016 [1]. The FAIR principles are agnostic of any specific technological implementation, which has contributed to their broad adoption and endorsement.
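What "machine-actionable" metadata means in practice can be shown with a minimal sketch. The field names below loosely follow common catalogue conventions (DCAT-style); the identifier, URLs and values are placeholders, not real records or an official FAIR schema.

```python
# A minimal, illustrative dataset description. Each field maps onto one of
# the FAIR letters; all identifiers and URLs are invented placeholders.
import json

dataset_record = {
    "identifier": "doi:10.xxxx/example",                   # F: globally unique, persistent identifier
    "title": "Example weather observations",
    "keywords": ["weather", "temperature", "Amsterdam"],   # F: rich, indexable metadata
    "access_url": "https://example.org/data/weather.csv",  # A: retrievable via a standard protocol
    "conforms_to": "https://example.org/schema/obs-v1",    # I: shared, formal vocabulary
    "license": "CC-BY-4.0",                                # R: clear conditions for reuse
}

# Because the record is structured, a machine can search a catalogue of such
# records without human intervention:
catalogue = [dataset_record]
hits = [r for r in catalogue if "weather" in r["keywords"]]
print(json.dumps(hits, indent=2))
```

The point is not the particular schema but that every question a reuser would otherwise ask a human (what is this? may I use it? in what format?) is answered in a form software can act on.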

Why do we need datasets that can be used in new contexts?

Ensuring that data sources can be (re)used in many different contexts can lead to unexpected results. For example, combining depression data with weather data can reveal correlations between mental states and weather conditions. The original data resources were not created with this reuse in mind; applying the FAIR principles to these datasets, however, makes such an analysis possible.
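The kind of reuse described above can be illustrated with a toy join: two independently collected datasets can be combined on a shared date key once both are findable and interoperable. All values below are invented for illustration.

```python
# Two hypothetical datasets, collected by different parties for different
# purposes, keyed by date. Values are made up.
mood = {          # daily self-reported mood scores (0-10)
    "2021-01-01": 4, "2021-01-02": 7, "2021-01-03": 3,
}
weather = {       # daily hours of sunshine
    "2021-01-01": 0.5, "2021-01-02": 6.2, "2021-01-03": 1.0,
}

# Inner join on the date key: keep only days present in both datasets.
joined = {d: (mood[d], weather[d]) for d in mood.keys() & weather.keys()}

for day in sorted(joined):
    score, sun = joined[day]
    print(f"{day}: mood={score}, sunshine={sun}h")
```

In reality the hard part is everything before this one-liner: agreeing on a shared key and units across datasets is exactly the interoperability problem FAIR addresses.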

FAIRness in the current crisis

A pressing example of the importance of FAIR data is the current COVID-19 pandemic. Many patients worldwide have been admitted to hospitals and intensive care units. While global efforts are moving towards effective treatments and a COVID-19 vaccine, there is still an urgent need to combine all the available data. This includes information from distributed multimodal patient datasets that are stored at local hospitals in many different, and often unstructured, formats. Learning about the disease and its stages, and which drugs may or may not be effective, requires combining many data resources, including SARS-CoV-2 genomics data, relevant scientific literature, imaging data, and various biomedical and molecular data repositories. One of the issues that needs to be addressed is combining privacy-sensitive patient information with open viral data at the patient level, where these datasets typically reside in very different repositories (often hospital-bound) without easily mappable identifiers. This underscores the need for federated and local data solutions, which lie at the heart of the FAIR principles. Examples of concerted efforts to build an infrastructure of FAIR data to combat COVID-19 and future virus outbreaks include the VODAN initiative [2] and the COVID-19 data portal organised by the European Bioinformatics Institute and the ELIXIR network [3].

FAIR data in Amsterdam

Many scientific and commercial applications require combining multiple sources of data for analysis. While digital infrastructure and (financial) incentives are required for data owners to share their data, we will only unlock the full potential of existing data archives when we are also able to find the datasets we need and use the data within them. The FAIR data principles allow us to better describe individual datasets and enable easier re-use in many diverse applications beyond the sciences for which they were originally developed. Amsterdam provides fertile ground for finding partners with the appropriate expertise for developing both digital and hardware infrastructures.

References

  1. M.D. Wilkinson, M. Dumontier, I.J. Aalbersberg, G. Appleton, M. Axton, A. Baak, … & B. Mons. The FAIR guiding principles for scientific data management and stewardship. Scientific Data 3(2016), Article No.160018. doi: 10.1038/sdata.2016.18
  2. https://www.go-fair.org/implementation-networks/overview/vodan/
  3. https://www.covid19dataportal.org/