Workshop on Multimedia Contents in Digital Libraries
Chania, Crete, Greece,
June 2 and 3, 2003
Sponsored by DELOS and NSF
Because
of the synergy between computational advances and biological understanding, we
can now begin to manipulate living systems at a level comparable to our ability to
manipulate physical and chemical systems. This synergy will have a direct
effect on our relatively new digital technologies. The serious problems that current
digital tools pose for cultural heritage creation and preservation, such as the
ephemeral nature of digital records, will inevitably be overtaken by
technology influenced by the biosciences. A carbon-based storage
medium might radically change digital preservation strategies of emulation
and/or migration of data.
Because
this is a revolution in our understanding of the nervous system and its
environment, the revolution will work both ways, from our cultural heritage to
science as well as from science to cultural heritage, as we incorporate the
application of these discoveries back into the creative process and the
technologies that support them.
For
instance, it was recently announced (February 2003) at Simon Fraser University
that a single-molecule switch has been developed that would enable computers to
have 10,000 times as much memory as current computers. This bio-chemical
approach to computational design is a good example of how rapidly
cross-disciplinary understanding will take effect in new products. In the same
month, Intel announced a new chip technology that supercharges cell phones to
access the Internet, display digital photographs, play music, and perform other
complicated tasks. Cell phones are, in a sense, becoming communication devices,
remote controls, and multimedia displays at once.
The
concept of aligning and managing digital cultural heritage assets is a reaction
to the evolution of digital production and multimedia networks. The
understanding of this evolution is rooted in experiences in the library and
museum communities, the digital solutions and design disciplines, the
communication and entertainment industries and law. Aligning and managing
multimedia assets is predicated on these origins. Neuro-Media will have many
more levels of assets. One important distinction is that in a Neuro-Media
environment everything that can communicate will. Just as all the biological systems within us and all
the biological systems in nature communicate (and a distinction should be
made between communicating and interacting), all the assets created
in Neuro-Media systems will actively exchange and employ information to create
new information, not merely react and store reactions. This aspect of
Neuro-Media will radically change the quality and density of the multimedia
created. In addition to
"cross-talk" data, attention will need to be paid to assets created and
maintained in scientific support systems, in artistic and social support
systems, in collaborative science and art production tools, in
cross-disciplinary content development, in automated, metaphor-dependent
taxonomy development, in multiple-user destination designs, in science/art
audience interactions and communications, in a variety of international legal
monitoring efforts, and in multicultural ethical debate. Programming paradigms may come
from areas of current research like pervasive computing; non-silicon approaches
such as quantum computing, DNA computing, biological computing, and molecular
memory; and advances in nanotechnology, to name just a few.
The
confluence of knowledge coming from neuroscience, cognitive science, genetics,
psychology, biology, nanotechnology engineering, computing, software
development, networks, and advances in the mathematics supporting them is
having a powerful impact not only on science but on the Humanities, Social
Sciences, Design, the Performing and Fine Arts, and the preservation of
Cultural Heritage. This change has been augmented by the application of imaging
technologies that are producing a new understanding of the nervous system,
opening up new scientific insight into human behavior while giving
artists and designers a range of tools and modes of conceptualization never
known before.
Questions
raised by Neuro-Media will be far-reaching. Some examples of current
production, archiving, and efforts to establish international standards are
relevant as preparation for these questions.
Art Center and CalTech
In
just two years, the Graduate Industrial Design Program at the Art Center
College of Design in Pasadena, California has transformed itself from a traditional
industrial design program that would usually be focusing on the creation of
furniture, appliances, and architectural components into a new kind of program
with courses like "Pretty Inside: Designing for the Interior of the Body," which
involves collaboration with the nanotechnology researchers of the Jet
Propulsion Laboratory at CalTech. Fashion design students, for instance, were
suddenly confronted with deep issues in creating for the human body. Art Center
instructor and designer Stacie London creates objects that capture a person's
self-image. She generates a custom
form developed from a collaborative investigation into the geometry of organic
shapes and textures with which her client identifies. Using rapid prototyping technology, she
creates the digital image of the object and the physical realization of the
computer file, resulting in a CAD/CAM object. With the "PERSONALfile" as the
underlying form, London generates a suite of personal objects, ranging in size
from microscopic to monumental, from cell phones to furniture. This process
presents the possibility of creating an identity, based on a 3-D object stored
as a computer file, which can be updated to match a person's evolving taste and
sense of identity. The "PERSONALfile" serves as both personal object
and electronic persona. This approach is in direct response to "growing" an
evolving form that is continuously linked to the person using it. It is a
bio-computing approach to style.
Opened in April 2003 at
CalTech and Art Center, "NEURO" is an exhibition that exposes the future
problems of cultural preservation. Six artists were paired with six scientific
researchers to create works that push boundaries in both the arts and sciences.
Ken Goldberg, a conceptual artist and professor of engineering at UC Berkeley,
was affected by the events of 9/11 and wanted to understand how two cultures
view the same event completely differently. He began working with Pietro
Perona, the director of the CalTech Center for Neuromorphic Systems (The Center
for Neuromorphic Systems Engineering at CalTech focuses on sensory systems for
robotics) on his "Infiltrate" project, which involves using scientific research
on how flies see, adapting the research to how fish see, filling a 110-gallon
tank with six koi (one orange and five white), and using advanced
hardware, software, and three cameras to track what the orange fish sees,
reconstruct it, and project it on a wall. Visitors figure out the correlation
between movement in the tank and movement on the wall, and get a sense of
seeing like an orange koi.
In
another piece for NEURO, artist Jessica Bronson has created "Perpetual Perceptual
(about a rose)," which uses "lightsticks" to emit words, categorized by
sense (sight, sound, taste, smell, and touch), that describe a rose
absent from the space. "Perpetual Perceptual (about a rose)" employs a phenomenon
referred to as "retinal painting," whereby viewers are most likely to
see an image when they are not looking directly at its source. Retinal painting
became the conceptual premise for Bronson's project, playing upon the idea of
peripheral vision as well as ways of looking at art. The artist deliberately
chose a site within the gallery that is peripheral to the main exhibition
space, and also visible to people walking along an exterior glass-walled
corridor adjacent to the gallery.
Christian
Möller's work CHEESE is an experiment in the architecture of sincerity.
Inspired by the omnipresent friendly, smiling faces of Hollywood's
entertainment industry, this work is based on research on "emotion
recognition" by the Machine Perception Laboratories of the University of
California, San Diego. More than 800 young actresses answered a small ad in an
entertainment industry trade magazine: "Looking for actress, news anchor
type, for a series of video portraits." CHEESE is a human-computer
interaction in which the computer takes the dominant position. On camera, six
actresses each try to hold a smile for as long as they can, up to one and a half
hours. Each ongoing smile is scrutinized by the computer perception system, and
whenever the display of happiness falls below a certain threshold, an alarm
alerts the actress to show more sincerity. Displayed in the gallery on six flat-panel
monitors, sequenced adjacent to each other along the wall, the piece creates a
concert of alert signals within an ambience of forced friendliness and
irritating melancholy.
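The mechanism Möller describes is, at its core, a threshold monitor on a continuously estimated emotion score. The following minimal sketch illustrates that idea only; it is not Möller's actual system, and the "smile_score" function, threshold value, and polling interval are assumptions introduced here for illustration (the score is simulated rather than computed from a camera feed).

import random
import time

SMILE_THRESHOLD = 0.6   # assumed cutoff below which the smile reads as insincere
POLL_INTERVAL = 0.5     # assumed seconds between successive estimates

def smile_score() -> float:
    """Hypothetical stand-in for a perception system's happiness estimate (0.0-1.0).

    A real installation would derive this from an emotion-recognition model
    running on the camera feed; here it is simply simulated."""
    return random.random()

def monitor(duration_seconds: float = 90 * 60) -> None:
    """Watch one performer and raise an alert whenever the smile dips below threshold."""
    start = time.time()
    while time.time() - start < duration_seconds:
        if smile_score() < SMILE_THRESHOLD:
            print("ALERT: please show more sincerity")  # stand-in for the gallery alarm
        time.sleep(POLL_INTERVAL)

if __name__ == "__main__":
    monitor(duration_seconds=5)  # short demonstration run

The point of the sketch is simply that the computer, not the performer, holds the evaluative position: a single scalar judgment, repeated indefinitely, drives the alarm.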
Multimedia cultural
heritage preservation is a dynamic, evolving endeavor. Cumulative cultural
repositories for new media, multimedia, and future media provide exceptional
test beds. "Cumulative" in this context means that an
organization or institution accepts, on an ongoing basis, cutting-edge
applications of art and technology. The business of the organization is to
promote, distribute, display, and archive the latest efforts of artists and
social activists to use information technologies for expression.
LA Freewaves is a nonprofit media arts network that
produces the largest independent biennial media arts festival in the United
States as well as ongoing workshops, curriculum materials and a Web site that
encourages artistic and social expression. Since 1989, this innovative organization has exhibited dynamic
and culturally relevant experimental multimedia from around the world at
alternative and established venues throughout Los Angeles by coordinating
activities with 65 arts organizations, 68 schools, 74 libraries, 32 cable
stations, 35 programmers, and more than 2000 video makers. LA Freewaves is now
contemplating merging with two other alternative media organizations that have
satellite access in order to broadcast its festival and archival holdings
internationally.
The LA Freewaves multimedia archive holds hundreds of
hours of video in a variety of formats, hundreds of Web sites, CD-ROMs and
DVDs, editions of print catalogues, curriculum guides, festival ephemera like
posters, administrative and publicity databases, correspondence, and design
materials. Since 1989 the formats have gone from installation video
performances to interactive multimedia, to the Internet, to DVD, to any combination
of these. With an interest in satellite broadcast as well as evolving
forms like those in the NEURO Exhibition, LA Freewaves continues to keep pace
with expanding definitions of new media. The development of the archive is
itself a kind of work of art that will survive as an example of how multimedia
has evolved.
What confronts LA Freewaves now is not how to get
access to the latest, most innovative uses of multimedia, but how to establish
a sustainable mechanism that can cope with the acceleration of changes in
multimedia while creating an effective strategy for preserving its
acquisitions. The cumulative multimedia
inventory that LA Freewaves represents must keep pace with artists,
preserve its holdings, find new ways to expose them, and deal
with increasingly complex, next-generation technologies, because artists are
always at the leading edge of cultural experimentation.
There are a variety of
projects attempting to deal with the issues of preserving electronic records.
As Manager of Communications for the Getty Information Institute, I interacted
with many of them, including a Getty-sponsored meeting in 1998 called Time
and Bits: Managing Digital Continuity
that brought a number of interesting minds together (Stewart Brand, Danny
Hillis, Brian Eno, Brewster Kahle, Kevin Kelly, Jaron Lanier, Doug Carlston, and
others) to try to unravel the issues of preserving electronic records. The
conclusions of the meeting were startling at the time. Everyone using digital
technology assumed that someone else (industry, the government) was taking
care of preservation. The reality was that, although it was discussed
academically, no one seemed to be doing it. The fear was that the turn of the
century would be seen historically as a dark time when a great deal of
electronic information would be lost because the media used to record it
would not last more than twenty-five years at best. Since that time the Library
of Congress and others have taken up the issues surrounding the current
ephemeral nature of digital materials. Strategies for migration, emulation, and
the archiving of hardware as well as software seem to be holding the problems at bay.
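Migration strategies of this kind depend on routinely verifying that stored files have not silently degraded before they are copied to new media. The sketch below shows one common way to do that, a checksum manifest and fixity check; it is an illustrative example only, not a description of any particular archive's workflow, and the directory, manifest format, and function names are assumptions introduced here.

import hashlib
import json
from pathlib import Path

def checksum(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(archive_dir: Path, manifest_path: Path) -> None:
    """Record a checksum for every file so later migrations can be verified."""
    manifest = {
        str(path.relative_to(archive_dir)): checksum(path)
        for path in sorted(archive_dir.rglob("*"))
        if path.is_file()
    }
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify(archive_dir: Path, manifest_path: Path) -> list:
    """Return the files whose current checksum no longer matches the recorded one."""
    manifest = json.loads(manifest_path.read_text())
    return [
        name for name, recorded in manifest.items()
        if checksum(archive_dir / name) != recorded
    ]

Run before and after each migration to new media, such a check flags files that have changed or decayed, which is the minimum evidence needed to claim that a copied record is still the same record.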
As an arts researcher and
participant in InterPARES 2 (International Research on Permanent Authentic
Records in Electronic Systems), I have become aware of a host of new issues. Conducted at the University of British Columbia's
School of Library, Archival and Information Studies from April 1994 to March
1997, InterPARES 1 defined the requirements for creating, handling, and
preserving reliable and authentic alphanumeric active electronic records. InterPARES
2 is a multidisciplinary international collaboration that is applying a
multi-method approach to the development of concepts, processes, and tools that
will help secure a protected and lasting environment for the
digital records produced in interactive, dynamic, and experiential systems in
the course of artistic, scientific, and electronic government activities.
What has become apparent to
me after attending a recent workshop meeting in Vancouver is that the long-term
issues of "authentic" multimedia records are vastly more complex than the
current efforts to define models for preservation. The definition of multimedia
is headed for Neuro-Media. This condition will produce an environmental
approach to records management, an ecological model that will be much closer to
modeling the nervous system than current modes of data modeling. One of the
most revealing areas where this becomes apparent is legal evidentiary
requirements. The network of interconnected legal areas includes: social access
in terms of freedom of information legislation, privacy legislation,
intellectual property (copyrights, patents, trademarks), technological
control (longevity), and expression, including freedom of speech, press, conscience,
and security; processes like the Paperwork Reduction Act, e-government commerce
regulations, ISO 9000, records retention, and licensing; technical standards like
ISO standards, digital signatures, digital rights management, formats like PDF, and
emergent technologies like molecular computing; contract relationships with
sovereign powers; and finally economic issues like insurance regulations.
Neuro-Media will raise the
level of concerns not only in the legal and ethical domains but far beyond into
the way information is allowed to communicate with other information and what
new forms emerge from those communications. What happens, for instance, if a
network of Neuro-Media-based
Internet sites is continually exchanging data and generating new music,
conceptual art, architectural designs, social models, or entertainment options?
Who owns the results? Who preserves the cultural artifacts? What constitutes an
artifact?
Neuro-Media Information
Science Questions
Rather than a traditional
"conclusion" to this paper, I would prefer to end with questions. The answers to these questions may indicate a new
approach to "information science" as it is applied to cultural heritage
production and preservation. The
tools for discovery and cultural heritage preservation will change very
rapidly. "Digital" technology as we now know it may be only a brief
intermediary technology, a step toward a more stable, potentially carbon-based
form whose "materiality" may be vastly different from the silicon/plastic
base we are now familiar with.
1. What areas of current research in Neuro-Media will have
the most effect on changing how we strategize cultural heritage preservation
technologies?
2. What are the prospects for synthetic cultural memory
in this domain?
3. What methods and structures being created for current
digital technologies will be most useful in Neuro-Media?
4. What is an authentic "Neuro-Media" cultural heritage
record?
5. What are the prospects for cultural continuity in this
kind of information domain?
6. What kind of models based on the understanding of the
nervous system will be useful for preserving cultural heritage information?
7. What is the ecology of Neuro-Media?
8. What sort of programming environments now being
considered for non-silicon computing will be useful for cultural heritage
preservation purposes?
9. What sort of cultural heritage organizations,
institutions, and businesses will arise from Neuro-Media development?
10. How will the museum, design studio, and the science
laboratory be joined?
11. What kind of products and services will be created
that will impact long-term cultural heritage preservation?
12. Is Neuro-Media legal in terms of current models of
intellectual property?
13. What is the time-line for Neuro-Media impact?
References
Information Arts: Intersections of Art, Science, and Technology, Stephen Wilson, Leonardo Books, MIT Press, January 15, 2002.
The Future of CPUs in Brief, David Essex, Technology Review, MIT Enterprise, January 28, 2002.
Time and Bits: Managing Digital Continuity, Margaret MacLean and Ben Davis, eds., J. Paul Getty Trust Publications, 2000.
When Everything Learns, Ben Davis and Craig Kanarick, eds., Razorfish PDF Publications, 2001 (http://www.digitaleverything.com/wheneverythinglearns2.pdf).
Liars, Lovers, and Heroes: What the New Brain Science Reveals About How We Become Who We Are, Steven R. Quartz and Terrence J. Sejnowski, William Morrow Publishing, 2002.
LA FREEWAVES (http://www.freewaves.org)
InterPARES (http://www.interpares.org)
Ben Howell DAVIS
Consultant, Davis International Associates, Los Angeles, CA.
Senior Scientist, Strategic Development Director, Media and Entertainment, Razorfish, Inc. (2000-2001). Manager, Electronic Publications, Getty Trust Publications, J. Paul Getty Trust, Los Angeles, CA (1999-2000). Board of Directors, Virtual Heritage Network. Program Manager, Communications, Getty Information Institute, J. Paul Getty Trust (1995-1999). Research Associate, MIT Center for Educational Computing Initiatives (CECI). Manager, MIT CECI AthenaMuse Software Consortium and Multimedia Application Development (1991-1994); Manager, MIT/Project Athena Visual Computing Group (1987-1991); Fellow, MIT Center for Advanced Visual Studies (1983-84); Instructor, MIT Media Lab, Visible Language Workshop (1988); Chair, Research Consortium of the National Demonstration Laboratory for Interactive Educational Technology at the Smithsonian Institution (1988-1990); Instructor, Aspects of Visualization and Formal Analysis of Media, Teachers College, Columbia University, Department of Communication, Computing, and Technology (1988-89); Lecturer, MIT Visual Arts Program, Department of Architecture and Urban Planning (1990); Chair, Department of Electronic Imagery, Atlanta College of Art, Atlanta, GA (1975-1986); Master of Fine Arts, Florida State University (1975); Bachelor of Science, Communications, University of Florida (1970).