
    ddc:004

    Explore " ddc:004" with insightful episodes like "Exponential Lower Bounds for Solving Infinitary Payoff Games and Linear Programs", "Supporting IT Service Fault Recovery with an Automated Planning Method", "Designing Usable and Secure Authentication Mechanisms for Public Spaces", "Architekturkonzepte für interorganisationales Fehlermanagement" and "Bayesian Regularization and Model Choice in Structured Additive Regression" from podcasts like ""Fakultät für Mathematik, Informatik und Statistik - Digitale Hochschulschriften der LMU - Teil 01/02", "Fakultät für Mathematik, Informatik und Statistik - Digitale Hochschulschriften der LMU - Teil 01/02", "Fakultät für Mathematik, Informatik und Statistik - Digitale Hochschulschriften der LMU - Teil 01/02", "Fakultät für Mathematik, Informatik und Statistik - Digitale Hochschulschriften der LMU - Teil 01/02" and "Fakultät für Mathematik, Informatik und Statistik - Digitale Hochschulschriften der LMU - Teil 01/02"" and more!

    Episodes (41)

    Exponential Lower Bounds for Solving Infinitary Payoff Games and Linear Programs

    Parity games form an intriguing family of infinitary payoff games whose solution is equivalent to the solution of important problems in automatic verification and automata theory. They also form a very natural subclass of mean and discounted payoff games, which in turn are very natural subclasses of turn-based stochastic payoff games. From a theoretical point of view, solving these games is one of the few problems that belong to the complexity class NP ∩ coNP, and even more interestingly, solving them has been shown to belong to UP ∩ coUP, and also to PLS. It is a major open problem whether these game families can be solved in deterministic polynomial time. Policy iteration is one of the most important algorithmic schemes for solving infinitary payoff games. It is parameterized by an improvement rule that determines how to proceed in the iteration from one policy to the next. It is a major open problem whether there is an improvement rule that results in a polynomial time algorithm for solving one of the considered game classes. Linear programming is one of the most important computational problems studied by researchers in computer science, mathematics and operations research. Perhaps more articles and books are written about linear programming than about all other computational problems combined. The simplex and the dual-simplex algorithms are among the most widely used algorithms for solving linear programs in practice. Simplex algorithms for solving linear programs are closely related to policy iteration algorithms. Like policy iteration, the simplex algorithm is parameterized by a pivoting rule that describes how to proceed from one basic feasible solution of the linear program to the next. It is a major open problem whether there is a pivoting rule that results in a (strongly) polynomial time algorithm for solving linear programs. We contribute to both policy iteration and the simplex algorithm by proving exponential lower bounds for several improvement and pivoting rules, respectively. For every considered improvement rule, we start by building 2-player parity games on which the respective policy iteration algorithm performs an exponential number of iterations. We then transform these 2-player games into 1-player Markov decision processes, which correspond almost immediately to concrete linear programs on which the respective simplex algorithm requires the same number of iterations. Additionally, we show how to transfer the lower bound results to more expressive game classes like payoff and turn-based stochastic games. In particular, we prove exponential lower bounds for the deterministic switch all and switch best improvement rules for solving games, for which no non-trivial lower bounds had been known since the introduction of Howard’s policy iteration algorithm in 1960. Moreover, we prove exponential lower bounds for the two most natural and most studied randomized pivoting rules suggested to date, namely the random facet and random edge rules for solving games and linear programs, for which no non-trivial lower bounds had been known for several decades. Furthermore, we prove an exponential lower bound for the switch half randomized improvement rule for solving games, which is considered to be the most important multi-switching randomized rule. Finally, we prove an exponential lower bound for the most natural and famous history-based pivoting rule, due to Zadeh, for solving games and linear programs, which had been an open problem for thirty years.
Last but not least, we prove exponential lower bounds for two other classes of algorithms that solve parity games, namely for the model checking algorithm due to Stevens and Stirling and for the recursive algorithm by Zielonka.
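
    As an illustration of the algorithmic scheme discussed above, the following is a minimal Python sketch of generic policy iteration parameterized by an improvement rule, together with simplified versions of the switch all, switch best and random edge rules. It is a sketch only, not the dissertation's construction; the game interface (game.player0_nodes, game.successors) and the valuate function are hypothetical placeholders.

        import random

        def policy_iteration(game, initial_policy, valuate, improvement_rule):
            """Generic scheme: evaluate the current policy, collect improving switches,
            let the improvement rule decide which switches to apply, repeat until stable."""
            policy = dict(initial_policy)
            while True:
                values = valuate(game, policy)        # e.g. valuation against an optimal counter-strategy
                switches = improving_switches(game, policy, values)
                if not switches:                      # no improving switch left: policy is optimal
                    return policy, values
                policy.update(improvement_rule(switches, values))

        def improving_switches(game, policy, values):
            """For each controlled node, the successors that strictly improve on its current choice."""
            return {v: better
                    for v in game.player0_nodes
                    if (better := [w for w in game.successors(v) if values[w] > values[policy[v]]])}

        # Simplified stand-ins for the kinds of improvement rules analysed above:
        def switch_all(switches, values):    # deterministic "switch all": improve every node at once
            return {v: max(ws, key=lambda w: values[w]) for v, ws in switches.items()}

        def switch_best(switches, values):   # "switch best": apply only the single best switch
            v, ws = max(switches.items(), key=lambda item: max(values[w] for w in item[1]))
            return {v: max(ws, key=lambda w: values[w])}

        def random_edge(switches, values):   # randomized "random edge": one uniformly random switch
            v = random.choice(list(switches))
            return {v: random.choice(switches[v])}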

    Supporting IT Service Fault Recovery with an Automated Planning Method

    Despite advances in software and hardware technologies, faults are still inevitable in highly interdependent, human-engineered and administered IT environments. Given the critical role of IT services today, it is imperative that faults, once they have occurred, be dealt with efficiently and effectively to avoid or reduce the resulting losses. Nevertheless, the complexities of current IT services, e.g., with regard to their scale, heterogeneity and highly dynamic infrastructures, make recovery a challenging task for operators, and these complexities will eventually outgrow the human capability to manage them. The difficulty is compounded by the fact that there are few well-devised methods available to support fault recovery. To tackle this issue, this thesis aims at providing a computer-aided approach that assists operators with fault recovery planning and, consequently, increases the efficiency of recovery activities. We propose a generic framework based on automated planning theory to generate plans for the recovery of IT services. At the heart of the framework is a planning component. Assisted by the other participants in the framework, the planning component aggregates the relevant information and computes recovery steps accordingly. The main idea behind the planning component is to support the planning operations with automated planning techniques, a research field of artificial intelligence. Provided with a general planning model, we show theoretically that the service fault recovery problem can indeed be solved by automated planning techniques; the relationship between a planning problem and a fault recovery problem is established by means of reductions between the two problems. After an extensive investigation, we choose a planning paradigm based on Hierarchical Task Networks (HTN) as the guideline for the design of our main planning algorithm, called H2MAP. To sustain the operation of the planner, a set of components revolving around the planning component is provided. These components are responsible for tasks such as translation between different knowledge formats, persistent storage of planning knowledge and communication with external systems. To ensure extensibility, we apply different design patterns to the components. We sketch and discuss the technical aspects of the implementations of the core components. Finally, as a proof of concept, the framework is instantiated for two distinct application scenarios.
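
    To illustrate the Hierarchical Task Network (HTN) paradigm mentioned above, here is a minimal decomposition sketch in Python for a toy recovery domain. The tasks, methods and primitive actions are invented for illustration; this is not the H2MAP algorithm or the thesis's knowledge format.

        # Compound tasks are decomposed via methods until only primitive recovery actions remain.
        PRIMITIVE = {"restart_process", "failover_to_standby", "notify_operator"}

        METHODS = {  # compound task -> alternative decompositions, tried in order
            "recover_service": [["diagnose", "repair"]],
            "diagnose": [["notify_operator"]],
            "repair": [["restart_process"],          # preferred: cheap local restart
                       ["failover_to_standby"]],     # fallback: switch to a standby host
        }

        def htn_plan(tasks):
            """Return a sequence of primitive actions for the given task list, or None."""
            if not tasks:
                return []
            head, rest = tasks[0], tasks[1:]
            if head in PRIMITIVE:
                tail = htn_plan(rest)
                return None if tail is None else [head] + tail
            for decomposition in METHODS.get(head, []):
                plan = htn_plan(decomposition + rest)
                if plan is not None:                 # first method that yields a plan wins
                    return plan
            return None

        print(htn_plan(["recover_service"]))         # -> ['notify_operator', 'restart_process']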

    Designing Usable and Secure Authentication Mechanisms for Public Spaces

    Usable and secure authentication is a research field that approaches different challenges related to authentication, including security, from a human-computer interaction perspective. That is, work in this field tries to overcome security, memorability and performance problems that are related to the interaction with an authentication mechanism. More and more services that require authentication, like ticket vending machines or automated teller machines (ATMs), take place in a public setting, in which security threats are more prevalent than in other settings. In this work, we approach the problem of usable and secure authentication for public spaces. The key result of the work reported here is a set of well-founded criteria for the systematic evaluation of authentication mechanisms. These criteria are justified by two different types of investigation: on the one hand, prototypical examples of authentication mechanisms with improved usability and security, and on the other hand, empirical studies of security-related behavior in public spaces. This work can thus be structured in three steps: Firstly, we present five authentication mechanisms that were designed to overcome the main weaknesses of related work, which we identified using a newly created categorization of authentication mechanisms for public spaces. The systems were evaluated in detail and showed encouraging results for future use. This, together with the negative sides and problems that we encountered with these systems, helped us to gain diverse insights into the design and evaluation of such systems in general. It showed that the development process of authentication mechanisms for public spaces needs to be improved to create better results, and it provided insights into why related work is difficult to compare. With this in mind, a first set of criteria was identified that can fill these gaps and improve the design and evaluation of authentication mechanisms, with a focus on the public setting. Furthermore, a series of studies was performed to gain insights into the factors influencing the quality of authentication mechanisms and to define a catalog of criteria that can be used to support creating such systems. It includes a long-term study of different PIN-entry systems as well as two field studies and field interviews on real-world ATM use. With this, we could refine the previous criteria and define additional criteria, many of them related to human factors. For instance, we showed that social issues, like trust, can highly affect the security of an authentication mechanism. We used these results to define a catalog of seven criteria. Besides their definition, we provide information on how applying them influences design, implementation and evaluation throughout the development process and, more specifically, how adherence improves authentication in general. A comparison of two authentication mechanisms for public spaces shows that a system that fulfills the criteria outperforms a system with less compliance. We could also show that compliance not only improves the authentication mechanisms themselves, but also allows for detailed comparisons between different systems.

    Keyword-Based Querying for the Social Semantic Web

    Enabling non-experts to publish data on the web is an important achievement of the social web and one of the primary goals of the social semantic web. Making the data easily accessible in turn has received little attention, which is problematic from the point of view of incentives: users are likely to be less motivated to participate in the creation of content if the use of this content is mostly reserved to experts. Querying in semantic wikis, for example, is typically realized in terms of full text search over the textual content and a web query language such as SPARQL for the annotations. This approach has two shortcomings that limit the extent to which data can be leveraged by users: combined queries over content and annotations are not possible, and users either are restricted to expressing their query intent using simple but vague keyword queries or have to learn a complex web query language. The work presented in this dissertation investigates a more suitable form of querying for semantic wikis that consolidates two seemingly conflicting characteristics of query languages, ease of use and expressiveness. This work was carried out in the context of the semantic wiki KiWi, but the underlying ideas apply more generally to the social semantic and social web. We begin by defining a simple modular conceptual model for the KiWi wiki that enables rich and expressive knowledge representation. One component of this model is structured tags, an annotation formalism that is simple yet flexible and expressive, and aims at bridging the gap between atomic tags and RDF. The viability of the approach is confirmed by a user study, which finds that structured tags are suitable for quickly annotating evolving knowledge and are perceived well by the users. The main contribution of this dissertation is the design and implementation of KWQL, a query language for semantic wikis. KWQL combines keyword search and web querying to enable querying that scales with user experience and information need: basic queries are easy to express; as the search criteria become more complex, more expertise is needed to formulate the corresponding query. A novel aspect of KWQL is that it combines both paradigms in a bottom-up fashion. It treats neither of the two as an extension to the other, but instead integrates both in one framework. The language allows for rich combined queries of full text, metadata, document structure, and informal to formal semantic annotations. KWilt, the KWQL query engine, provides the full expressive power of first-order queries, but at the same time can evaluate basic queries at almost the speed of the underlying search engine. KWQL is accompanied by the visual query language visKWQL, and an editor that displays both the textual and visual form of the current query and reflects changes to either representation in the other. A user study shows that participants quickly learn to construct KWQL and visKWQL queries, even when given only a short introduction. KWQL allows users to sift the wealth of structure and annotations in an information system for relevant data. If relevant data constitutes a substantial fraction of all data, ranking becomes important. To this end, we propose PEST, a novel ranking method that propagates relevance among structurally related or similarly annotated data. Extensive experiments, including a user study on a real-life wiki, show that PEST improves the quality of the ranking over a range of existing ranking approaches.
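
    To give a flavor of the relevance-propagation idea behind PEST, the following Python sketch spreads keyword-search scores along weighted structural or annotation links. The graph, damping factor and iteration count are illustrative assumptions, not the dissertation's exact formulation.

        def propagate_relevance(edges, initial, damping=0.5, iterations=20):
            """edges: {node: {neighbor: weight}}; initial: keyword-search score per node."""
            scores = dict(initial)
            for _ in range(iterations):
                new_scores = {}
                for node, neighbors in edges.items():
                    total = sum(neighbors.values()) or 1.0
                    received = sum(scores.get(n, 0.0) * w / total for n, w in neighbors.items())
                    # keep part of the original keyword relevance, add the propagated part
                    new_scores[node] = (1 - damping) * initial.get(node, 0.0) + damping * received
                scores = new_scores
            return scores

        # Toy example: page "b" has no keyword hit but is strongly linked to the relevant page "a".
        edges = {"a": {"b": 1.0}, "b": {"a": 1.0}, "c": {"a": 0.2}}
        print(propagate_relevance(edges, {"a": 1.0, "b": 0.0, "c": 0.1}))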

    Easing the Creation Process of Mobile Applications for Non-Technical Users

    In this day and age, the mobile phone is becoming one of the most indispensable personal computing devices. People no longer use it just for communication (i.e. calling, sending messages) but for other aspects of their lives as well. Because of this rise in demand for different and innovative applications, mobile companies (i.e. mobile handset manufacturers and mobile network providers) and organizations have realized the power of collaborative software development and have changed their business strategy. Instead of hiring specific organizations to do the programming, they are now opening up their APIs and tools to allow ordinary people to create their own mobile applications, either for personal use or for profit. However, the problem with this approach is that there are people who might have nice ideas of their own but do not possess the technical expertise to create applications implementing these ideas. The goal of this research is to find ways to simplify the creation of mobile applications for non-technical people by applying model-driven software development, particularly domain-specific modeling, combined with techniques from the field of human-computer interaction (HCI), particularly iterative, user-centered system design. As a proof of concept, we concentrate on the development of applications in the domain of mHealth and use the Android Framework as the target platform for code generation. The iterative, user-centered design and development of the front-end tool, called the Mobia Modeler, eventually led us to create a tool that features a configurable-component-based design and an integrated modeless environment to simplify the different development tasks of end-users. The Mobia models feature both constructs specialized for specific domains (e.g. sensor component, special component) and constructs that are applicable to any type of domain (e.g. structure component, basic component). In order to accommodate the different needs of end-users, a clear separation between the front-end tools (i.e. the Mobia Modeler) and the underlying code generator (i.e. the Mobia Processor) is recommended, as long as there is a consistent model in between that serves as a bridge between the different tools.
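
    As a rough illustration of the separation between a declarative model and a code generator described above, here is a hypothetical Python sketch; the model format and generated calls are invented and do not reflect the actual Mobia Modeler or Mobia Processor.

        model = {  # a declarative application model a non-technical user might assemble
            "app": "BloodPressureDiary",
            "components": [
                {"type": "sensor", "measure": "blood_pressure"},
                {"type": "structure", "screens": ["home", "history"]},
            ],
        }

        def generate_android_stub(model):
            """Turn the declarative model into (very simplified) Android-style source text."""
            lines = [f"// generated for {model['app']}"]
            for component in model["components"]:
                if component["type"] == "sensor":
                    lines.append(f'registerSensorListener("{component["measure"]}");')
                elif component["type"] == "structure":
                    lines.extend(f'addScreen("{screen}");' for screen in component["screens"])
            return "\n".join(lines)

        print(generate_android_stub(model))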

    Generalised Interaction Mining: Probabilistic, Statistical and Vectorised Methods in High Dimensional or Uncertain Databases

    Knowledge Discovery in Databases (KDD) is the non-trivial process of identifying valid, novel, useful and ultimately understandable patterns in data. The core step of the KDD process is the application of Data Mining (DM) algorithms to efficiently find interesting patterns in large databases. This thesis concerns itself with three inter-related themes: Generalised interaction and rule mining; the incorporation of statistics into novel data mining approaches; and probabilistic frequent pattern mining in uncertain databases. An interaction describes an effect that variables have -- or appear to have -- on each other. Interaction mining is the process of mining structures on variables describing their interaction patterns -- usually represented as sets, graphs or rules. Interactions may be complex, represent both positive and negative relationships, and the presence of interactions can influence another interaction or variable in interesting ways. Finding interactions is useful in domains ranging from social network analysis, marketing, the sciences, e-commerce, to statistics and finance. Many data mining tasks may be considered as mining interactions, such as clustering; frequent itemset mining; association rule mining; classification rules; graph mining; flock mining; etc. Interaction mining problems can have very different semantics, pattern definitions, interestingness measures and data types. Solving a wide range of interaction mining problems at the abstract level, and doing so efficiently -- ideally more efficiently than with specialised approaches -- is a challenging problem. This thesis introduces and solves the Generalised Interaction Mining (GIM) and Generalised Rule Mining (GRM) problems. GIM and GRM use an efficient and intuitive computational model based purely on vector valued functions. The semantics of the interactions, their interestingness measures and the type of data considered are flexible components of vectorised frameworks. By separating the semantics of a problem from the algorithm used to mine it, the frameworks allow both to vary independently of each other. This makes it easier to develop new methods by focusing purely on a problem's semantics and removing the burden of designing an efficient algorithm. By encoding interactions as vectors in the space (or a sub-space) of samples, they provide an intuitive geometric interpretation that inspires novel methods. By operating in time linear in the number of interesting interactions that need to be examined, the GIM and GRM algorithms are optimal. The use of GRM or GIM provides efficient solutions to a range of problems in this thesis, including graph mining, counting based methods, itemset mining, clique mining, a clustering problem, complex pattern mining, negative pattern mining, solving an optimisation problem, spatial data mining, probabilistic itemset mining, probabilistic association rule mining, feature selection and generation, classification and multiplication rule mining. Data mining is a hypothesis generating endeavour, examining large databases for patterns suggesting novel and useful knowledge to the user. Since the database is a sample, the patterns found should describe hypotheses about the underlying process generating the data. In searching for these patterns, a DM algorithm makes additional hypotheses when it prunes the search space. Natural questions to ask, then, are: "Does the algorithm find patterns that are statistically significant?" and "Did the algorithm make significant decisions during its search?". Such questions address the quality of patterns found through data mining and the confidence that a user can have in utilising them. Finally, statistics has a range of useful tools and measures that are applicable in data mining. In this context, this thesis incorporates statistical techniques -- in particular, non-parametric significance tests and correlation -- directly into novel data mining approaches. This idea is applied to statistically significant and relatively class correlated rule based classification of imbalanced data sets; significant frequent itemset mining; mining complex correlation structures between variables for feature selection; mining correlated multiplication rules for interaction mining and feature generation; and conjunctive correlation rules for classification. The application of GIM or GRM to these problems leads to efficient and intuitive solutions. Frequent itemset mining (FIM) is a fundamental problem in data mining. While it is usually assumed that the items occurring in a transaction are known for certain, in many applications the data is inherently noisy or probabilistic, for example due to noise added in privacy-preserving data mining applications, aggregation or grouping of records leading to estimated purchase probabilities, or databases capturing naturally uncertain phenomena. The consideration of existential uncertainty of item(sets) makes traditional techniques inapplicable. Prior to the work in this thesis, itemsets were mined if their expected support was high. This returns only an estimate, ignores the probability distribution of support, provides no confidence in the results, and can lead to scenarios where itemsets are labeled frequent even if they are more likely to be infrequent. Clearly, this is undesirable. This thesis proposes and solves the Probabilistic Frequent Itemset Mining (PFIM) problem, where itemsets are considered interesting if the probability that they are frequent is high. The problem is solved under the possible worlds model and a proposed probabilistic framework for PFIM. Novel and efficient methods are developed for computing an itemset's exact support probability distribution and frequentness probability, using the Poisson binomial recurrence, generating functions, or a Normal approximation. Incremental methods are proposed to answer queries such as finding the top-k probabilistic frequent itemsets. A number of specialised PFIM algorithms are developed, with each being more efficient than the last: ProApriori is the first solution to PFIM and is based on candidate generation and testing. ProFP-Growth is the first probabilistic FP-Growth type algorithm and uses a proposed probabilistic frequent pattern tree (Pro-FPTree) to avoid candidate generation. Finally, the application of GIM leads to GIM-PFIM, the fastest known algorithm for solving the PFIM problem. It achieves orders of magnitude improvements in space and time usage, and leads to an intuitive subspace and probability-vector based interpretation of PFIM.

    Aspect-Oriented State Machines

    UML state machines are a widely used language for modeling software behavior. They are considered simple and intuitively comprehensible, and are hence one of the most popular languages for modeling reactive components. However, this seeming ease of use vanishes rapidly as soon as the complexity of the system to be modeled increases. In fact, even state machines modeling "almost trivial" behavior may become rather hard to understand and error-prone. In particular, synchronization of parallel regions and history-based features are often difficult to model in UML state machines. We therefore propose High-Level Aspect (HiLA), a new, aspect-oriented extension of UML state machines, which can considerably improve the modularity, and thus the comprehensibility and reusability, of UML state machines. Aspects are used to define additional or alternative system behaviors at certain "interesting" points of time in the execution of the state machine, and achieve a high degree of separation of concerns. The distinguishing feature of HiLA with respect to other approaches to aspect-oriented state machines is that HiLA aspects are defined on a high, i.e. semantic, level as opposed to a low, i.e. syntactic, level. This semantic approach often makes HiLA aspects simpler and more comprehensible than aspects of syntactic approaches. The contributions of this thesis include 1) the abstract and the concrete syntax of HiLA, 2) the weaving algorithms showing how the (additional or alternative) behaviors, separately modeled in aspects, are composed with the base state machine to give the complete behavior of the system, and 3) a formal semantics for HiLA aspects defining how the aspects are activated and (after their execution) left. We also discuss which conflicts between HiLA aspects are possible and how to detect them. The practical applicability of HiLA is shown in a case study of a crisis management system.
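
    As a rough illustration of the aspect idea, the following Python sketch runs extra behavior whenever the state machine enters an "interesting" state. It is purely illustrative and reflects neither HiLA's concrete syntax nor its weaving algorithm.

        base_transitions = {                  # (state, event) -> next state
            ("Idle", "start"): "Running",
            ("Running", "stop"): "Idle",
        }

        aspects = [                           # (pointcut: state entered, advice to run)
            ("Running", lambda: print("aspect: log that processing started")),
        ]

        def fire(state, event):
            """Take one base transition, then run every aspect whose pointcut matches."""
            next_state = base_transitions.get((state, event), state)
            for pointcut, advice in aspects:
                if pointcut == next_state:
                    advice()
            return next_state

        state = fire("Idle", "start")         # prints the advice and returns "Running"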

    Analyse von IT-Anwendungen mittels Zeitvariation

    Performance problems frequently occur in practice in IT applications, despite increasing hardware performance and a variety of approaches to developing performant software throughout the software life cycle. Model-based performance analyses make it possible to prevent performance problems on the basis of design artifacts. For existing or partially implemented IT applications, the usual attempt is to remedy performance problems by scaling the hardware or optimizing the code. Both approaches have drawbacks: model-based approaches are not generally used because of the high level of expertise they require, and after-the-fact optimization is an unsystematic and uncoordinated process. This dissertation proposes a new approach to performance analysis for subsequent optimization. Performance interactions within the IT application are identified by means of an experiment. The basis of the experiment, the analysis instrument, is a targeted temporal variation of the start time, end time or duration of operations in the IT application. This approach can be automated and can be applied in the software development process in a structured way and without a steep learning curve. Using the Turing machine, it is proven that the temporal variation applied by the analysis instrument preserves the correctness of sequential computations. This result is extended to concurrent systems by means of the parallel register machine and discussed. With this practice-oriented machine model it is shown that the causal relationships discovered by the analysis instrument identify optimization candidates. A dedicated experimentation environment, in which the operations of a system consisting of software and hardware can be varied programmatically, is realized by means of a virtualization solution. Techniques for using the analysis instrument through instrumentation are described. A method for determining the minimum hardware requirements of IT applications is presented and exemplified with the experimentation environment, using two scenarios and the Android operating system. Various methods for deriving the system's optimization candidates from the observations of the experiment are presented, classified and evaluated. The identification of optimization candidates and optimization potential is demonstrated in practice with these methods on illustration scenarios and several large IT applications. As a consistent extension, a test method based on the analysis instrument is described for validating a system against non-deterministically reproducible faults that arise from missing synchronization mechanisms (e.g., races) or from timing behavior (e.g., Heisenbugs, aging-related faults).
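
    As a rough illustration of the time-variation idea described above (deliberately varying the start times and durations of operations to expose performance interactions), here is a small Python sketch that injects delays into individual operations and measures the effect on end-to-end runtime. The workload and delay points are invented for illustration, not taken from the dissertation.

        import time

        def workload(delays):
            """Two 'operations' whose injected delays stand in for varied start times and durations."""
            start = time.perf_counter()
            time.sleep(delays.get("fetch", 0.0)); data = list(range(1000))    # operation A
            time.sleep(delays.get("process", 0.0)); total = sum(data)         # operation B
            return time.perf_counter() - start

        baseline = workload({})
        for op in ("fetch", "process"):
            varied = workload({op: 0.05})     # delay one operation by 50 ms and observe the effect
            print(op, "sensitivity:", round(varied - baseline, 3), "s")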

    Visual Analysis of In-Car Communication Networks

    Analyzing, understanding and working with complex systems and large datasets has become a familiar challenge in the information era. The explosion of data worldwide affects nearly every part of society, particularly the science, engineering, health, and financial domains. Looking, for instance, at the automotive industry, engineers are confronted with the enormously increased complexity of vehicle electronics. Over the years, a large number of advanced functions, such as ACC (adaptive cruise control), rear-seat entertainment systems or automatic engine start/stop, have been integrated into the vehicle. The functions have thereby become more and more distributed over the vehicle, leading to the introduction of several communication networks. Overlooking all relevant data facets, understanding dependencies, analyzing the flow of messages and tracking down problems in these networks has become a major challenge for automotive engineers. Promising approaches to overcome information overload and to provide insight into complex data are Information Visualization (InfoVis) and Visual Analytics (VA). Over the last decades, these research communities have spent much effort on developing new methods to help users obtain insight into complex data. However, few of these solutions have yet reached end users, and moving research into practice remains one of the great challenges in visual data analysis. This situation is particularly true for large company settings, where very little is known about the additional challenges, obstacles and requirements of InfoVis/VA development and evaluation. Users have to be better integrated into our research processes in terms of adequate requirements analysis, understanding practices and challenges, developing well-directed, user-centered technologies and evaluating their value within a realistic context. This dissertation explores a novel InfoVis/VA application area, namely in-car communication networks, and demonstrates how information visualization methods and techniques can help engineers to work with and better understand these networks. Based on a three-year internship with a large automotive company and close cooperation with domain experts, I developed a profound understanding of the specific challenges, requirements and obstacles for InfoVis/VA application in this area and learned that “designing with, not for, the people” is highly important for successful solutions. The three main contributions of this dissertation are: (1) an empirical analysis of current working practices of automotive engineers and the derivation of specific design requirements for InfoVis/VA tools; (2) the successful application and evaluation of nine prototypes, including the deployment of five systems; and (3) based on the three-year experience, a set of recommendations for developing and evaluating InfoVis systems in large company settings. I present ethnographic studies with more than 150 automotive engineers. These studies helped us to understand currently used tools, the underlying data, tasks and user groups, and to categorize the field into application sub-domains. Based on these findings, we propose implications and recommendations for designing tools that support current practices of automotive network engineers with InfoVis/VA technologies. I also present nine InfoVis design studies that we built and evaluated with automotive domain experts and use them to systematically explore the design space of applying InfoVis to in-car communication networks. Each prototype was developed in a user-centered, participatory process, each with a focus on a specific sub-domain of target users with specific data and tasks. Experimental results from studies with real users are presented that show that the visualization prototypes can improve the engineers’ work in terms of working efficiency, better understanding and novel insights. Based on the lessons learned from repeatedly designing and evaluating our tools together with domain experts at a large automotive company, I discuss challenges and present recommendations for deploying and evaluating VA/InfoVis tools in large company settings. I hope that these recommendations can guide other InfoVis researchers and practitioners in similar projects by providing them with new insights, such as the necessity of close integration with current tools and given processes, distributed knowledge and a high degree of specialization, and the importance of addressing prevailing mental models and time restrictions. In general, I think that large company settings are a promising and fruitful field for novel InfoVis applications and expect our recommendations to be useful tools for other researchers and tool designers.

    Automated Amortised Analysis

    Steffen Jost researched a novel static program analysis that automatically infers formally guaranteed upper bounds on the use of compositional quantitative resources. The technique is based on manual amortised complexity analysis. Inference is achieved through a type system annotated with linear constraints. Any solution to the collected constraints yields the coefficients of a formula that expresses an upper bound on the resource consumption of a program in terms of the sizes of its various inputs. The main result is the formal soundness proof of the proposed analysis for a functional language. The strictly evaluated language features higher-order types, full mutual recursion, nested data types and suspension of evaluation, and can deal with aliased data. The presentation focuses on heap space bounds. Extensions allowing the inference of bounds on stack space usage and worst-case execution time are demonstrated for several realistic program examples. These bounds were inferred by the generic implementation of the technique created for this work. The implementation is highly efficient and solves even large examples within seconds.
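
    As a flavor-only illustration of the annotated-type idea, the following Python sketch shows a heap bound of the shape c1*|input| + c0 whose coefficients satisfy hand-collected linear constraints for a list-copying function. It is not the thesis's type system; the constraints and cost model are simplified assumptions.

        # Hypothetical constraints collected while "typing" a list-copying function that
        # allocates exactly one cell per element and nothing else:
        #   c1 >= 1   (one unit of potential per input element pays for one allocation)
        #   c0 >= 0   (no fixed start-up allocation is needed)
        # Any solution yields a sound bound; the smallest one is c1 = 1, c0 = 0.

        def heap_bound(c1, c0, input_size):
            return c1 * input_size + c0

        def copy_list(xs):
            return [x for x in xs]            # allocates one cell per element of xs

        xs = list(range(10))
        allocated_cells = len(copy_list(xs))  # proxy for the number of allocations
        assert allocated_cells <= heap_bound(1, 0, len(xs))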

    Exploratory Browsing

    In recent years, digital media has influenced many areas of our lives. The transition from analogue to digital has substantially changed our ways of dealing with media collections. Today's interfaces for managing digital media mainly offer fixed linear models corresponding to the underlying technical concepts (folders, events, albums, etc.), or metaphors borrowed from their analogue counterparts (e.g., stacks, film rolls). However, people's mental interpretations of their media collections often go beyond the scope of a linear scan. Besides explicit search with specific goals, current interfaces cannot sufficiently support explorative and often non-linear behavior. This dissertation presents an exploration of interface design to enhance the browsing experience with media collections. The main outcome of this thesis is a new model of Exploratory Browsing to guide the design of interfaces that support the full range of browsing activities, especially Exploratory Browsing. We define Exploratory Browsing as the behavior in which the user is uncertain about her or his targets and needs to discover areas of interest (exploratory), within which she or he can explore in detail and possibly find some acceptable items (browsing). According to the browsing objectives, we group browsing activities into three categories: Search Browsing, General Purpose Browsing and Serendipitous Browsing. In the context of this thesis, Exploratory Browsing refers to the latter two browsing activities, which go beyond explicit search with specific objectives. We systematically explore the design space of interfaces to support the Exploratory Browsing experience. Applying the methodology of User-Centered Design, we develop eight prototypes, covering the two main usage contexts of browsing with personal collections and in online communities. The main media types studied are photographs and music. The main contribution of this thesis lies in deepening the understanding of how people's exploratory behavior has an impact on interface design. This thesis contributes to the field of interface design for media collections in several respects. With the goal of informing interface design to support the Exploratory Browsing experience with media collections, we present a model of Exploratory Browsing covering the full range of exploratory activities around media collections. We investigate this model in different usage contexts and develop eight prototypes. The substantial insights gathered during the development and evaluation of these prototypes inform the further refinement of our model: we uncover the underlying transitional relations between browsing activities and discover several stimulators that encourage a fluid and effective activity transition. Based on this model, we propose a catalogue of general interface characteristics and employ this catalogue as criteria to analyze the effectiveness of our prototypes. We also present several general suggestions for designing interfaces for media collections.

    Suchen, finden – glauben?

    Googling three times a day: in Germany, more than eighty percent of Internet users regularly search in search engines or web directories, regardless of age, gender and education. For broad parts of the population, the Internet has established itself as the number-one research medium. Private blogs, forums, wikis and news portals: the credibility of information on the web varies considerably, and search engines reflect this heterogeneity. Andreas Tremel examines how users deal with search results of varying credibility. When users are asked about the importance of the credibility of information on the web, it appears to be the decisive selection criterion. Nevertheless, users seem naive, poorly informed and superficial in their handling of search results. To be able to observe actual behavior as a function of result credibility for the first time, a search engine simulation was developed that made experimental manipulation possible within a field study (n=400). Against the background of credibility, the use of keyword advertising and the influence of search involvement were also examined.

    Analyse und Erweiterung von Vorlesungsaufzeichnungen der UnterrichtsMitschau aus der Perspektive der gemäßigt konstruktivistischen Lerntheorie

    'How can an application for learning with lecture recordings be designed so that it supports knowledge acquisition as effectively as possible?' This was the central question of this diploma thesis. To answer it, a new prototypical learning application was implemented, building on an existing system for providing recorded lectures. To this end, the development possibilities of the 'UnterrichtsMitschau' system at LMU München were worked out in line with moderate constructivist learning theory. To ensure that the application would be accepted by student users, their wishes and ideas were gathered in a focus group discussion and incorporated into the resulting concept. On this basis, an application was developed whose central innovations are the addition of annotations and the possibility of cooperative learning in two different modes. In the first cooperation mode, learners exchange ideas about the lecture content asynchronously, i.e., with a time offset, by means of annotations. In the second, the 'synchronous cooperative mode', learners are in direct contact via an audio connection and work through the lecture recording synchronously. A follow-up study with 15 potential users showed, among other things, that both cooperative modes of the new system were rated better than the existing application. In particular, the users felt that the process characteristics of learning from moderate constructivist learning theory were better supported. Furthermore, the test persons would be more inclined to use the new application in their studies than the existing one.