
## June 19 – June 23, 2017


## ICCC17 Accepted Papers

Yalemisew Abgaz, Diarmuid O'Donoghue, Donny Hurley, Ehtzaz Chaudhry and Jian Jun Zhang.
Abstract: Dr Inventor is a tool that aims to enhance the professional (Pro-c) creativity of researchers by suggesting novel hypotheses, arising from analogies between publications. Dr Inventor processes original research documents using a combination of lexical analysis and cognitive computation to identify novel comparisons that suggest new research hypotheses, with the objective of supporting a novel research publication. Research on analogical reasoning strongly suggests that the value of analogy-based comparisons depends primarily on the strength of the mapping (or counterpart projection) between the two analogs. An evaluation study of a number of computer-generated comparisons attracted creativity ratings from a group of practising researchers. This paper explores a variety of theoretically motivated metrics operating on different conceptual spaces, identifying some weak associations with users' creativity ratings. Surprisingly, our results show that metrics focused on the mapping appear to have less relevance to creativity than metrics assessing the inferences (blended space). This paper includes a brief description of a research project currently exploring the best research hypothesis generated during this evaluation. Finally, we explore PCA as a means of combining multiple metrics from several blending spaces into a single measure for detecting comparisons that enhance researchers’ creativity.
Abstract: The increasing popularity of computational creativity (CC) in recent years gives rise to the need for educational resources. This paper presents several modules that together act as a guide for developing new CC courses as well as improving existing curricula. In addition to introducing core CC concepts, we address pedagogical approaches to this interdisciplinary subject. An accessible overview of the field allows this paper to double as an introductory tutorial to computational creativity.
Wendy Aguilar and Rafael Pérez Y Pérez.
Abstract: This work describes a computational model for early cognitive development, implemented as a creative process inspired by Piaget’s and Cohen’s theories. This model is named Dev E-R (Developmental Engagement-Reflection). Here we present the results obtained when the agent implementing this model was granted the capacity of touching but not seeing the virtual world with which it was interacting, and when it could both see and touch its environment. Under the five criteria we propose (novelty, utility, emergence, motivations, and adaptation), these results can be considered the agent's first manifestations of creative behavior.
Khalid Alnajjar, Mika Hämäläinen, Hanyang Chen and Hannu Toivonen.
Abstract: Many linguistic creativity applications rely heavily on knowledge of nouns and their properties. Such knowledge sources are scarce and limited, however. We present a graph-based approach for expanding and weighting properties of nouns, given an initial knowledge base of noun-property pairs. In this paper, we focus on famous characters, either real or fictional, and categories of people, such as actor, hero, or child. In our case study, we started with 11--25 initial properties per noun on average, and the method found 63--132 additional properties, on average. Using an empirical evaluation we show that the expanded properties and weights are consistent with human judgement. The resulting knowledge base can be utilized in creative tasks concerning figurative language. For instance, metaphors based on famous characters can be used in various applications, including story generation, creative writing, advertising and comic generation.
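The kind of graph-based expansion this abstract describes can be sketched in miniature. This is a toy illustration only, not the authors' algorithm: the propagation rule, the `damping` factor, and the example nouns and weights are all assumptions.

```python
from collections import defaultdict

def expand_properties(known, damping=0.5):
    """Toy expansion: a noun inherits, at a damped weight, properties
    from every noun it already shares at least one property with.
    `known` maps noun -> {property: weight}."""
    nouns = list(known)
    shared = defaultdict(int)  # edge weight = number of shared properties
    for i, a in enumerate(nouns):
        for b in nouns[i + 1:]:
            overlap = set(known[a]) & set(known[b])
            if overlap:
                shared[(a, b)] = shared[(b, a)] = len(overlap)

    expanded = {n: dict(props) for n, props in known.items()}
    for (a, b), w in shared.items():
        for prop, weight in known[b].items():
            if prop not in known[a]:
                candidate = damping * weight * w
                expanded[a][prop] = max(expanded[a].get(prop, 0.0), candidate)
    return expanded

kb = {
    "hero":   {"brave": 1.0, "strong": 0.8},
    "knight": {"brave": 0.9, "armoured": 1.0},
}
out = expand_properties(kb)
# "hero" and "knight" share "brave", so each inherits the other's
# remaining property at a damped weight: hero gains "armoured" (0.5),
# knight gains "strong" (0.4).
```

The paper's method additionally learns weights consistent with human judgement; this sketch only shows how new noun-property pairs can flow along a noun-similarity graph.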
Álvaro Amorin, Luís Fabrício Góes, Alysson Silva and Celso França.
Abstract: Creating culinary recipes is one of the most creative human activities. It requires combining ingredients, performing the recipe steps, creating specific diets, and other tasks.
In addition, the existence of publicly available repositories of recipes, as well as scientific advances in areas such as Food Chemistry and Neuro-Gastronomy, encourage the generation of new and pleasurable recipes from algorithms.
Although the number of ingredients allows the generation of a huge number of recipes, only a small fraction of this potential is exploited. This paper proposes, implements and analyzes a computational creativity system called Creative Flavor Pairing, which acts cooperatively with different profiles of cooks, assuming the responsibility of suggesting food ingredients that can generate creative recipes. In our case study, generating creative ingredient combinations with a genetic algorithm that uses the Regent-Dependent Creativity (RDC) metric as a fitness function showed that the most creative combinations are also the most popular among humans. Experimental results also showed that our system was able to suggest more creative combinations than those currently published in the largest cooking social networks.
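The evolutionary loop can be sketched as follows. The RDC metric itself is not reproduced here; the `SURPRISE` table and the pantry are invented stand-ins for illustration only.

```python
import random

PANTRY = ["chocolate", "chili", "basil", "strawberry", "bacon", "lime"]

# Invented surprise scores for pairings; the real system scores
# combinations with the Regent-Dependent Creativity (RDC) metric.
SURPRISE = {frozenset(p): s for p, s in [
    (("chocolate", "chili"), 0.9), (("strawberry", "basil"), 0.8),
    (("bacon", "chocolate"), 0.7), (("lime", "strawberry"), 0.3),
]}

def fitness(combo):
    # Sum pairwise surprise; unseen pairs get a small default score.
    pairs = [frozenset((a, b)) for i, a in enumerate(combo) for b in combo[i + 1:]]
    return sum(SURPRISE.get(p, 0.1) for p in pairs)

def evolve(size=3, pop=20, gens=30, seed=1):
    rng = random.Random(seed)
    population = [rng.sample(PANTRY, size) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]                # elitist selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            child = list(dict.fromkeys(a + b))[:size]  # crossover + dedup
            while len(child) < size:                   # pad if parents overlap
                child.append(rng.choice(PANTRY))
            if rng.random() < 0.3:                     # mutation
                child[rng.randrange(size)] = rng.choice(PANTRY)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

With the surprise table as the fitness signal, the population drifts toward unexpected-but-scored pairings such as chocolate with chili, mirroring the paper's claim that a creativity metric can steer a genetic algorithm toward combinations humans also favour.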
Agnese Augello, Emanuele Cipolla, Ignazio Infantino, Giovanni Pilato, Adriano Manfrè and Filippo Vella.
Abstract: What we appreciate in dance is the ability of people to spontaneously improvise new movements and choreographies, surrendering to the music rhythm, being inspired by the current perceptions and sensations and by previous experiences, deeply stored in their memory.
Like other human abilities, this is, of course, challenging to reproduce in an artificial entity such as a robot. Recent generations of anthropomorphic robots, the so-called humanoids, however, exhibit increasingly sophisticated skills and have raised interest in the robotics community in designing and experimenting with systems devoted to automatic dance generation.
In this work, we highlight the importance of modelling computational creativity in dancing robots to avoid a mere execution of preprogrammed dances. In particular, we exploit a deep learning approach that allows a robot to generate new dancing movements in real time according to the music it is listening to.
Benjamin Bay, Paul Bodily and Dan Ventura.
Abstract: In order to promote artificial intelligence, provide resources for artistic communities, and further the linguistic capabilities of computationally creative systems, we present a computational process for creative text transformation and evaluation. Its purpose is to help solve the fundamental problems posed by the fields of natural language generation (NLG) and natural language processing (NLP): computationally writing and understanding texts. Our process entails the use of 1) constraints to guide word replacement, and 2) vector word embedding to approximate meaning. We introduce intentions as objects that drive the generation of creative artefacts; a text's desired theme, emotion, meter, or rhyme scheme may be represented via intention. Our implementation of this process is oriented around poetry and song lyrics, and we provide specific details on it. The system successfully produces syntactically correct, human-voiced text. An evaluation suggests that our process successfully evokes human-recognizable sentiments, and that even familiar texts are difficult to recognize after undergoing transformation. We discuss subjects of interest for future research.
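Constraint-guided word replacement with embeddings can be sketched minimally. The vectors and vocabulary below are hand-made assumptions, not a trained embedding or the authors' system.

```python
import math

# Hand-made toy "embeddings" (assumed for illustration only).
VECS = {
    "happy": (0.9, 0.1, 0.2), "joyful": (0.85, 0.15, 0.25),
    "sad":   (0.1, 0.9, 0.3), "gloomy": (0.15, 0.85, 0.35),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def replace(word, allowed):
    """Among candidates permitted by a constraint (e.g. metre or rhyme),
    pick the one whose vector best preserves the original meaning."""
    return max(allowed, key=lambda c: cosine(VECS[word], VECS[c]))

chosen = replace("happy", ["joyful", "gloomy"])  # -> "joyful"
```

The constraint set (`allowed`) stands in for an intention such as a rhyme scheme; cosine similarity stands in for meaning preservation.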
Paul Bodily, Benjamin Bay and Dan Ventura.
Abstract: In Hierarchical Bayesian program learning (HBPL), trained models for subconcepts are combined to achieve human-like results in one-shot classification, parsing, and generation of hand-written characters. We contend that the HBPL framework is well-suited for modeling creative artefacts inasmuch as it allows explicit modeling of intention, structure, and substructure. We discuss issues related to factoring joint distributions over artefact classes generally, using lyrical composition as a specific example. How joint distributions are factored largely reflects the philosophical debates that occur among artists themselves, suggesting that the HBPL framework might serve as a more precise scaffolding for such debates. Besides generating, the concept-learning framework naturally lends itself to broader applications, including recommendation systems.
Liam Bray, Oliver Bown and Benjamin Carey.
Abstract: In this paper we analyse three CC-specific interface categories: direct manipulation systems, programmable interfaces, and highly encapsulated systems. We conduct a preliminary investigation into a single expert user's experience of using tools designed for musical composition, and discuss the implications of encapsulation in CC-specific scenarios.
Mason Bretan, Gil Weinberg and Larry Heck.
Abstract: Several methods exist for a computer to generate music based on data, including Markov chains, recurrent neural networks, recombinancy, and grammars. We explore the use of unit selection and concatenation as a means of generating music using a procedure based on ranking, where we consider a unit to be a variable-length number of measures of music. We first examine whether a unit selection method that is restricted to a finite-size unit library can be sufficient for encompassing a wide spectrum of music. We do this by developing a deep autoencoder that encodes a musical input and reconstructs the input by selecting from the library. We then describe a generative model that combines a deep structured semantic model (DSSM) with an LSTM to predict the next unit, where units consist of four, two, and one measures of music. We evaluate the generative model using objective metrics including mean rank and accuracy and with a subjective listening test in which expert musicians are asked to complete a forced-choice ranking task. We compare our model to a note-level generative baseline that consists of a stacked LSTM trained to predict forward by one note.
João Cunha, João Gonçalves, Pedro Martins, Penousal Machado and F. Amílcar Cardoso.
Abstract: A descriptive approach for automatic generation of visual blends is presented. The implemented system, the Blender, is composed of two components: the Mapper and the Visual Blender. The approach uses structured visual representations along with sets of visual relations which describe how the elements – into which the visual representation can be decomposed – relate to each other. Our system is a hybrid blender, as the blending process starts at the Mapper (conceptual level) and ends at the Visual Blender (visual representation level). The experimental results show that the Blender is able to create analogies from input mental spaces and produce well-composed blends, which follow the rules imposed by its base analogy and its relations. The resulting blends are visually interesting and some are considered unexpected.
Arne Eigenfeldt, Oliver Bown, Andrew Brown and Toby Gifford.
Abstract: A musebot is defined as a piece of software that autonomously creates music and collaborates in real time with other musebots. The specification was released early in 2015, and several developers have contributed musebots to ensembles that have been presented in North America, Australia, and Europe. This paper describes a recent code jam between the authors that resulted in four musebots co-creating a musical structure that included negotiated dynamic changes and a negotiated ending. Outcomes reported here include a demonstration of the protocol’s effectiveness across different programming environments, the establishment of a minimal set of parameters for effective musical interaction between the musebots, and strategies for coordination of episodic structure and conclusion.
Ahmed Elgammal, Bingchen Liu, Mohamed Elhoseiny and Marian Mazzone.
Abstract: We propose a new system for generating art. The system generates art by looking at art and learning about style; and becomes creative by increasing the arousal potential of the generated art by deviating from the learned styles. We build on Generative Adversarial Networks (GAN), which have shown the ability to learn to generate novel images simulating a given distribution. We argue that such networks are limited in their ability to generate creative products in their original design. We propose modifications to the GAN objective to make it capable of generating creative art by maximizing deviation from established styles and minimizing deviation from art distribution. We conducted experiments to compare the response of human subjects to the generated art with their response to art created by artists. The results show that human subjects could not distinguish art generated by the proposed system from art generated by contemporary artists and shown in top art fairs.
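One way to read "maximizing deviation from established styles" is as a style-ambiguity term that pushes a style classifier's posterior toward uniform. The sketch below illustrates that idea only, not the authors' exact objective; the posteriors are invented.

```python
import math

def style_ambiguity_loss(posterior):
    """Cross-entropy between a style classifier's posterior over K known
    styles and the uniform distribution; it is smallest when the generated
    image cannot be attributed to any single established style."""
    k = len(posterior)
    return -sum((1.0 / k) * math.log(p) for p in posterior)

confident = [0.97, 0.01, 0.01, 0.01]   # clearly attributable to one style
ambiguous = [0.25, 0.25, 0.25, 0.25]   # style-ambiguous
# The ambiguous posterior yields the lower loss, so a generator trained
# against such a term is rewarded for deviating from established styles,
# while the usual adversarial loss keeps it close to the art distribution.
```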
Abstract: A creative robot autonomously produces a behavior that is novel for the robot or generated through a creative reasoning process. In the current state of the art in interactive robotics, while a robot may learn a task by observing a human teacher, it usually cannot later adapt what it has learned to the context of a new environment. The differences between the original, source environment and the new, target environment lie on a spectrum of similarity and have a direct impact on the difficulty of the transfer problem. We examine a subset of transfer problems in which the robot must exhibit creative behavior in order to perform in the new environment successfully. We argue that for transfer problems in which the source and target environments are sufficiently different, creativity is necessary for successful task transfer. To address such problems, we propose the use of human-robot co-creativity as a framework for collaboration between the human teacher and the robot learner in order to address task transfer.
João Gonçalves, Pedro Martins and Amílcar Cardoso.
Abstract: This paper presents BlendVille, a computational system based on the framework of Conceptual Blending. This system seeks to implement our new ideas regarding a computational creativity system which we expect to be able to create novelty from existing knowledge. The system differs from our previous framework, Divago, in the usage of Information Theory and Simplicity Theory to create new concepts low in information discrepancy and complexity. As such, we expect its output to be simpler to interpret and attractive to the human mind. We investigate its behaviour, compare its output with Divago and report on our findings.
Kazjon Grace, Mary Lou Maher, Maryam Mohseni and Rafael Perez Y Perez.
Abstract: A concept, design or other artefact is p-creative when it is simultaneously novel and valuable for a specific individual. This is defined by contrast to h-creative artefacts, which are novel and valuable for a society as a whole. When we talk about p-creativity in computational systems we usually mean that something is creative to the system itself: the system has its own experiences and goals, and with them judges novelty and value. We propose an alternative approach aimed at simulating what a specific human user will find p-creative in order to stimulate that user towards p-creative behaviour. We define a framework for doing so, explore several domains in which it could be applied, and describe some preliminary results from a system designed to encourage students to read more broadly. We end the paper with a discussion of how such systems could generate framing narratives to better persuade users to engage with specific artefacts.
Abstract: In general, existing systems in computational creativity (CC) cannot explain why they are being creative, without ultimately referring back to their designer. Answering the "why?" would allow for the attribution of intentional agency, and likely lead to a stronger perception of creativity. We argue that this requires us to judge creative value not exclusively from a human (social) perspective, but from the perspective of the system in question. Enactive artificial intelligence (AI), a framework inspired by autopoietic enactive cognitive science, equips us with the necessary conditions for a value function to reflect a system's own intrinsic goals. We translate this framework's general claims to CC and ground a system's creative activity in the maintenance of its identity. We describe candidate principles to realise enactive AI's conditions, and thus lay the foundations for a minimal, non-anthropocentric model of intentional creative agency. We discuss crucial implications for the design and evaluation of CC, and address why human-level intentional creative agency is so hard to achieve.
Sarah Harmon.
Abstract: An author might read other written works to polish their own writing skill, just as a painter might analyze other paintings to hone their own craft. Yet, either might also visit the theatre, listen to a piece of music, or otherwise experience the world outside their particular discipline in search of creative insight. This paper explores one example of how a computational system might rely on what it has learned from analyzing another distinct form of expression to produce creative work. Specifically, the system presented here extracts semantic meaning from an input text and uses this knowledge to generate ambient music. A small case study was conducted to provide a preliminary assessment of the system's procedure and to direct future work.
James Hodson.
Abstract: This paper offers a critical review of the underlying assumptions in the field of *Computational Creativity*. We present and integrate the state of the art in the search for machines that could be considered creative by human standards. Through the lens of existing literature, philosophical thought, and empirical experimentation, we propose ways to better understand the roots of creativity, and a new approach for its investigation within the field of Artificial Intelligence.
Daniel Johnson, Robert Keller and Nicholas Weintraut.
Abstract: We describe a neural network architecture designed to learn the musical structure of jazz melodies over chord progressions, then to create new melodies over arbitrary chord progressions from the resulting connectome (representation of neural network structure). This architecture consists of two sub-networks, the interval expert and the chord expert, each being LSTM (long short-term memory) recurrent networks. These two sub-networks jointly learn to predict a probability distribution over future notes conditioned on past notes in the melody. We describe a training procedure for the network and an implementation as part of the open-source Impro-Visor (Improvisation Advisor) application and demonstrate our method by providing generated melodies based on a variety of training sets.
Abstract: How are computers typically perceived in co-creativity scenarios? Recent research within computational creativity considers how to attribute creativity to computational agents within co-creative scenarios. Human evaluation forms a key part of such attribution or evaluation of creative contribution. The use of human opinion to evaluate computational creativity, however, runs the risk of being distorted by conscious or unconscious bias. The case study in this paper shows people are significantly less confident at evaluating the creativity of a co-creative system involving computational and human participants, compared to the (already tricky) task of evaluating individual systems in isolation. To justify computational creativity research in co-creative computational software, we need to demonstrate that - unlike creativity support tools - computational co-creative software can make an attributable, recognisable creative contribution. To progress co-creativity research, we should combine the use of co-creative computational models with the findings of computational creativity evaluation research into what makes software creative, or what makes software appear creative. Increased collaborations across these two computational creativity research topics will help us to address a key question: what needs to be demonstrated for computers to be seen as genuine partners in the creative process, making a creative contribution?
Ahmed Khalifa, Gabriella A. B. Barros and Julian Togelius.
Abstract: DeepTingle is a text prediction and classification system trained on the collected works of the renowned fantastic gay erotica author Chuck Tingle. Whereas the writing assistance tools you use every day (in the form of predictive text, translation, grammar checking and so on) are trained on generic, purportedly "neutral" datasets, DeepTingle is trained on a very specific, internally consistent but externally arguably eccentric dataset. This allows us to foreground and confront the norms embedded in data-driven creativity and productivity assistance tools. As such tools effectively function as extensions of our cognition into technology, it is important to identify the norms they embed within themselves and, by extension, us. DeepTingle is realized as a web application based on LSTM networks and the GloVe word embedding, implemented in JavaScript with Keras-JS.
Carolyn Lamb, Daniel Brown and Charles Clarke.
Abstract: We present TwitSonnet, a Twitter found-poetry system. TwitSonnet attempts to build meaningful poems based on criteria we previously identified as separating good computer-generated poems from bad ones: namely, novelty, meaning, reaction and craft. We report the results of an experiment with human raters showing that TwitSonnet poems focusing on these criteria are not rated as artistically superior to poems that do not. We discuss the implications of this negative result for TwitSonnet's development, and the general implication of negative experimental results for computational creativity as a field.
Simo Linkola, Anna Kantosalo, Tomi Männistö and Hannu Toivonen.
Abstract: We formulate a model of computational metacreativity. It consists of various aspects of creative self-awareness that potentially contribute, in various combinations, to the metacreative capabilities of a creative system. Our model is inspired by a psychological view of metacreativity promoting the awareness of one's thoughts during the creative process, and draws from the field of self-adaptive software systems to explicate different viewpoints of metacreativity in creative systems. The model is designed to help in analyzing metacreative capabilities of creative systems, and to guide the development of creative systems to a more autonomous and adaptive direction.
Roisin Loughran and Michael O'Neill.
Abstract: We present a review of papers presented at IJWCC and ICCC, specifically considering what applications these papers engage with, either directly in generative systems or indirectly in evaluation or framework proposals. The primary focus of this work was to ascertain whether there are any trends in the applications considered over the years, any topics that are becoming more dominant, or any that have been neglected. Our initial classification into 16 specific categories indicated that Music was the most popular application domain; when we regrouped these into seven broader categories, we determined that papers involving variations of language processing were most popular. We considered the trend among application domains over the past 12 years and noted that, contrary to early discussions on creativity, problems based on logic, science or mathematics do not appear often. We consider the implications of this research as to what information it may convey both to the computational creativity community and to a general computer science audience.
Pedro Lucas and Carlos Martinho.
Abstract: There is untapped potential in having a computer work as a colleague with the video game level designer as a source of creative stimuli, instead of simply working as his slave. This paper presents 3Buddy, a co-creative level design tool exploring this digital peer paradigm, aimed at fostering creativity by allowing human and computer to work together in the context of level design, and describes a case study of the approach to produce content using the Legend of Grimrock 2 level editor. Suggestions are generated and iteratively evolved by multiple inter-communicating genetic algorithms guiding three different domains: innovation (exploring new directions), guidelines (respecting specific design goals) and convergence (focusing on current co-proposal). The interface allows the designer to orient the tool behaviour in the space defined by these dimensions. This paper details the inner workings of the system and presents an exploratory study showing, on the one hand, how the tool was used differently by professional and amateur level designers, and on the other hand, how the nuances of the co-creative interaction through an intention-oriented interface may be a source of positive influence for the creative level design process.
Mark J. Nelson, Swen Gaudl, Simon Colton, Edward J. Powley, Blanca Perez Ferrer, Rob Saunders, Peter Ivey and Michael Cook.
Abstract: We introduce fluidic games, a type of casual creator that blends game play and game design. Fluidic games have a core of built-in games that anchor a space of design possibilities around them, and encourage players to alternate between playing specific games and playing with the design space. Our Gamika Technology platform enables fluidic games on mobile devices, and we have thus far built three of them. In doing so we have found that even for simple games, fluidic games require computational creativity support. This takes several forms intended to keep design sessions playful and fast-moving, including automated game design used as a form of brainstorming, mixed-initiative co-creative design to ease design-space navigation, and automated game playing to evaluate game dynamics. Finally, we have exhibited this fluidic-games concept in three distinct cultural settings: a series of rapid game jams lasting 1-2 hours each, an in-progress semester-long enrichment course with a local school, and an art installation that foregrounds an autonomous version of the system exploring a fluidic game on its own, at least if the audience will allow it to do so.
Todd Pickering and Anna Jordanous.
Abstract: Predictability is the polar opposite of originality, and as such it is a notable obstacle that should be overcome in the pursuit of computational creativity. Accurately modelling a human's understanding of predictability would be a monumental task, requiring a contextually rich network of social interaction, literature, news, and media. However, by artificially instilling a computer with some basic ideas about what is predictable in a given scenario, it can begin to gain an understanding of how to subvert expectation.

This project attempts to implement such a process into a specially designed story generation system known as Chronicle, inspired by Vladimir Propp's 'Morphology of the Folk Tale'. Chronicle aims to demonstrate decision making capabilities, and utilises an autonomous self-evaluation system modelled on predictability.

Decisions made during the story generation process are based on probabilities defined by the expectations of the typical reader, and are amassed to formulate an overall predictability rating. The decision making process is autonomously manipulated by the system in order to pursue a customisable predictability target.

Chronicle was demonstrably accurate at evaluating its output in some cases, and less accurate in other cases. Further refinement is required to increase its efficacy, but it presents a promising step towards negotiating predictability in computational creativity.
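The idea of steering decisions toward a predictability target can be sketched as follows. This is a toy illustration with invented probabilities, not Chronicle's actual decision model.

```python
def choose(options, target):
    """Pick the option whose (assumed) reader-expectation probability
    lies closest to the desired predictability target."""
    return min(options, key=lambda o: abs(options[o] - target))

# Invented expectation probabilities for one story decision:
decision = {"hero wins": 0.8, "hero flees": 0.15, "villain repents": 0.05}

predictable = choose(decision, target=0.9)   # -> "hero wins"
surprising = choose(decision, target=0.2)    # -> "hero flees"
```

A high target favours the expected outcome, a low target a surprising one; averaging the probabilities of the chosen options over a whole story would give an overall predictability rating of the kind the abstract describes.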
Abstract: Artificial Musical Intelligence is a subject that spans a broad array of disciplines related to human cognition, social interaction, cultural understanding, and music generation. Although significant progress has been made on particular areas within this subject, the combination of these areas remains largely unexplored. In this paper, we propose an architecture that facilitates the integration of prior work on Artificial Intelligence and music, with a focus on enabling computational creativity. Specifically, our architecture represents the verbal and non-verbal communication used by human musicians using a novel multi-agent interaction model, inspired by the interactions that a jazz quartet exhibits when it performs. In addition to supporting direct communication between autonomous musicians, our architecture presents a useful step toward integrating the different subareas of Artificial Musical Intelligence.
Abstract: Recently, computational systems began approaching challenges that were previously considered to lay exclusively in the human creative domain, such as the art of storytelling and lyrics writing. In this paper, we explore combining these two art forms through the automated creation of ballads. We introduce MABLE (MexicA's BaLlad machinE), based on the plot generation system, MEXICA. Integrating both cognitive and statistical models, MABLE is the first computational system to write narrative-based lyrics. User studies demonstrate MABLE's success at creating emotionally-engaging lyrics with coherent plot.
Abstract: Movie studios have compelling reasons to love sequels. Familiar characters from successful films are valuable properties that come with large built-in audiences eager to pay for more. That such characters are commodities is beyond dispute, yet they are as much commodities for creative story-telling as for commercial film-making. Familiar characters come with pre-existing audiences and pre-existing audience expectations, and writers can exploit the latter to reduce exposition, establish mise en scène, create mood or motivate the use of genre tropes. Familiarity can also be abused for comic ends, to create narratives dense in references to other stories, worlds or genres. Post-modern irony thus abounds in stories that combine old characters in new, clever and perhaps even logically impossible ways. In this work we explore the value of a large knowledge-base of familiar characters within the plotting mechanics of the Scéalextric system, to quantify the extent to which familiarity can enhance or diminish our enjoyment of machine-crafted stories.
Abstract: Building a computationally creative system is a challenging undertaking. While such systems are beginning to proliferate, and a good number of them have been reasonably well-documented, it may seem, especially to newcomers to the field, that each system is a bespoke design that bears little chance of revealing any general knowledge about CC system building. This paper seeks to allay this concern by presenting an abstract CC system description, or, in other words, a practical, general approach for constructing CC systems.
Abstract: We introduce a new sketch based interface for generating animations. Unlike traditional digital tools, ours is parameterized entirely by a neural network with no preprogrammed rules or knowledge representations. The capability of our sketching tool to support visual exploration and communication is demonstrated within the context of facial images, though our framework is domain independent. Our recorded sketches serve not only as a means for generating a specific animation, but also a standalone visual encapsulation of an animation's semantic operation which can be reused and refined.
Kunwar Yashraj Singh, Nicholas Davis, Chih-Pin Hsiao, Ricardo Macias, Brenda Lin and Brian Magerko.
Abstract: This paper reports on a new deep machine learning architecture to classify and generate input for co-creative systems. Our approach combines the generational strengths of Variational Autoencoders with the image sharpness typically associated with Generative Adversarial Networks, thereby enabling a generative deep learning architecture for training co-creative agents called the Auxiliary Classifier Variational Autoencoder (AC-VAE). We report the experimental results of our network’s classification accuracy and generational loss on the MNIST numerical image dataset and TU-Berlin sketch data set. Results indicate our technique is effective for classifying and generating sketched object images, even with large sizes (above 64x64 pixels). We also describe how our network is particularly useful for co-creative agents since it can generate diverse concepts, as well as transform and morph user generated sketches while maintaining their concept identity.
