Humans sketch and recognise (hand-drawn) sketches easily and seemingly without complex reasoning, even if the sketches are not as precise as what an artist would draw. Nevertheless, drawing sketches is an attentive process and a kind of “art” that needs to be learned, as the ability to draw and recognise sketches is not innate. In many cases, especially when spatial relations are of central concern, sketches are more suitable than language and make it easier to draw on one’s well-developed spatial intuitions than verbal descriptions do. Owing to these properties, sketches are used in various ways, for example, to communicate ideas, to support design processes by externalising ideas, to understand complex relations or processes, and even to support memorisation. Recognition or retrieval of sketches by computational tools, on the other hand, is generally difficult and requires long computations or the simulation of complex mechanisms (e.g., spatial reasoning, matching, analogy making, abstraction, indexing, learning) that are not as intuitive as human sketch production or recognition.
Automating the recognition of sketches differs fundamentally from image processing and is a challenging task for various reasons, particularly the imprecision of the drawn strokes that usually constitute such sketches. Essentially, sketches are distinct from images because they convey explicit meaning, and sketch generation requires a representational apparatus that allows such sketches to be designed.
With the success of touch interfaces as mainstream tools, cognitively inspired AI research faces the challenge of developing human-computer interfaces (HCI) that employ the human capacity for sketch understanding as a basis for enhanced communication with machines and modern equipment. Such automation also supports human sketch usage in a range of use cases, such as designing early prototypes, communication, and education, to mention a few.
While clearly some pictures are sketches and some are not, it is not equally clear whether some sketches are pictures and some are not. Under a broad construal of sketching and sketches, a sequence of gestures, for example, may be accepted as a sketch, even though it is clearly not a picture. However we interpret “sketch”, the worth of a sketch is determined by its quality: if a picture is worth a thousand words, does this also hold for sketches?
- How do humans conceptualise ideas via sketching, and how do they recognise salient parts of objects (sketched by others)?
- What are the main underlying cognitive mechanisms responsible for such recognition?
- Which parts of a sketch play more significant roles (in recognising a sketch and identifying it as a certain object) than other parts?
- And, more importantly, how can we ultimately simulate the human ability to easily recognise salient concepts in sketches, and to understand which objects are sketched, in an AI model guided by the way humans operate on sketches to perform the same tasks?
The DySket symposium aims to contribute to a deeper discussion of the aforementioned topics and to answering questions of the latter kind on a scientific, interdisciplinary basis. The symposium addresses various cognitive and computational issues, ranging from applications of computational sketch understanding and generation systems, through the cognitive mechanisms and constraints governing sketch understanding and generation, to general principles for evaluating the quality of generated sketches and the relation of that quality to the quality of recognition. The topics of interest cover various issues related to sketching and sketch understanding, as well as gestures, scene understanding and interpretation, and the generation of image schemas to depict concepts.
Program Schedule & Form:
- The DySket symposium will be held on Tuesday, 27 September, and will span two sessions of the KogWis2016 conference: one from 9:00am to 11:00am and another from 2:00pm to 4:00pm.
- Experts from a range of disciplines will give their perspectives on the symposium’s main topic and its many facets, while highly interactive forms of discussion and idea sharing are encouraged around two fundamental issues:
- [Issue1: Sketching & Interaction] (see Talks #5, #2, and #4 below in the “Speakers & Contributions” section; the actual order of talks is assumed to be according to the program’s schedule).
- [Issue2: Spatial Concepts & Sketching] (see Talks #3, #1, and #6 below in the “Speakers & Contributions” section; the actual order of talks is assumed to be according to the program’s schedule).
- DySket will start with a short introduction given by one of the organisers. The aim of this introduction is to initiate a reconsideration of some terms and to build common ground among the attending researchers from the different disciplines.
- A total of six presentations follow, three in each session (see the detailed program here). Each presentation lasts approximately twenty to twenty-five minutes, plus five minutes for urgent Q&A.
- The symposium concludes with an open, general discussion, where everyone is encouraged to share their ideas and to discuss any remaining non-urgent questions and/or opinions.
Speakers & Contributions:
- Angela Schwering & Malumbo Chipofya (Institute for Geoinformatics, University of Muenster, Germany):
“Sketchmapia – A Framework for Recognition, Interpretation and Visualization of Sketch Maps, and Integration of Sketch Maps and Metric Maps.”
This talk will give an overview of the components of Sketchmapia and one of its application areas: community-based land tenure recording. The challenges imposed by the generality of free-form map sketching drive us to pursue different solutions for sketch-based user interfaces for geospatial applications. We will summarize the approaches used for sketch map recognition and for alignment with metric maps.
- Stefan Schneider (Institute of Cognitive Science, University of Osnabrueck, Germany):
“Mental Object Manipulation to Generate Sketches.”
While recognition of objects seemingly proceeds with ease, drawing an object, that is, depicting its properties or perspectives, requires conscious effort. Two components seem to be essential: (a) the ability to mentally manipulate objects, and (b) strategies for depicting these in 2D sketches. This talk presents results from case studies using a refined think-aloud method, in which subjects acquire knowledge about geometric objects by mentally manipulating them while solving the task of generating sketches of the constructed relations.
- Oliver Kutz (Free University of Bozen-Bolzano, Italy):
“Image Schemas, Concept Invention, and Generalisation.”
In cognitive science, image schemas are identified as fundamental patterns of cognition. They are schematic prelinguistic conceptualisations of events and serve as conceptual building blocks for concepts. We here propose that image schemas can also play an important role in computational concept invention, namely within the computational realisation of conceptual blending. We discuss the construction of a library of formalised image schemas, and illustrate how they can guide the search for a base space in the concept invention workflow. Their schematic nature is captured by the idea of organising image schemas into families. Formally, they are represented as heterogeneous, interlinked theories. In this context, we in particular discuss the problem of generalisation in connection with image schemas.
- Kirsten Bergmann (Faculty of Technology and Center of Excellence “Cognitive Interaction Technology” [CITEC], University of Bielefeld, Germany):
“Social Sketching – Depicting Gestures in Multimodal Communication.”
In spatial communication people spontaneously use depictive gesturing as a way to convey information. In this talk, we will present work on analyzing iconic gesture use in multimodal dialogue, their cognitive underpinnings in imagistic mental representations, and their use to establish common understanding of spatial information among communication partners. Based on empirical results, computational models are developed that allow for simulating and evaluating communicative speech-gesture behavior in artificial agents.
- Zoe Falomir Llansola (Spatial Cognition Center [BSCC], University of Bremen, Germany):
“Image Understanding Using Sketching and Qualitative Descriptors.”
A computational method is presented which obtains a sketch of any digital image and then applies qualitative models (of shape, colour, topology, location, direction) to describe the features of the objects involved in that sketch. These qualitative features can be translated into narratives for human-machine interaction or into description logics for agent understanding.
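As a rough illustration of the kind of pipeline described in this abstract (not the speakers’ actual system), a “sketch” can be extracted from an image by simple gradient thresholding, and colours mapped to coarse qualitative labels. All function names, thresholds, and labels below are hypothetical:

```python
# Toy sketch of an image-to-qualitative-description pipeline.
# Hypothetical illustration only; thresholds and labels are assumptions.

def extract_sketch(image, threshold=50):
    """Mark a pixel as a stroke if its horizontal or vertical
    grayscale gradient exceeds the threshold."""
    strokes = set()
    for y in range(len(image)):
        for x in range(len(image[0])):
            dx = abs(image[y][x] - image[y][x - 1]) if x > 0 else 0
            dy = abs(image[y][x] - image[y - 1][x]) if y > 0 else 0
            if max(dx, dy) > threshold:
                strokes.add((x, y))
    return strokes

def qualitative_colour(rgb):
    """Map an RGB triple to a coarse qualitative colour name."""
    r, g, b = rgb
    if r > 200 and g > 200 and b > 200:
        return "light"
    if r > max(g, b) + 50:
        return "reddish"
    if g > max(r, b) + 50:
        return "greenish"
    if b > max(r, g) + 50:
        return "bluish"
    return "dark"

# A 4x4 grayscale image with a bright square in the lower-right corner.
img = [
    [0, 0,   0,   0],
    [0, 0,   0,   0],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
print(sorted(extract_sketch(img)))        # stroke pixels along the square's edge
print(qualitative_colour((220, 40, 30)))  # → "reddish"
```

The resulting stroke set and colour labels are the sort of symbolic features that could then be rendered as a narrative or encoded in description logics, as the abstract describes.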
- Kai-Uwe Kuehnberger (Institute of Cognitive Science, University of Osnabrueck, Germany):
“The Role of Concepts in Sketch Understanding.”
Organising Committee:
- Ahmed M. H. Abdel-Fattah (Faculty of Science, Ain Shams University, Egypt).
- Haythem O. Ismail (Faculty of Engineering, Cairo University, Egypt).
- Kai-Uwe Kuehnberger (Institute of Cognitive Science, University of Osnabrueck, Germany).
The DySket symposium is one of the activities supported by the German Academic Exchange Service (Deutscher Akademischer Austauschdienst, DAAD), as part of the JeSICS research project measures (Project ID: 57247603).
For further information or inquiries, please contact a member of the organising committee, or write to the project’s email address dedicated to the symposium: firstname.lastname@example.org.
Last updated: 20-September-2016.