A Survey of Evaluation Techniques and Systems for Answer Set Programming

Prof. Francesco Ricca | May 3, 2019 | 11:00 | S.1.42

Abstract:

Answer set programming (ASP) is a prominent knowledge representation and reasoning paradigm that has found both industrial and scientific applications. The success of ASP is due to the combination of two factors: a rich modeling language and the availability of efficient ASP implementations. In this talk we trace the history of ASP systems, describing the key evaluation techniques and their implementation in actual tools.
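The semantics that such systems compute can be made concrete with a small sketch. The following brute-force Python enumerator (illustrative only; the real systems surveyed in the talk use far more sophisticated grounding and solving techniques) accepts exactly those atom sets that equal the least model of their own Gelfond-Lifschitz reduct:

```python
from itertools import chain, combinations

# A normal logic program: each rule is (head, positive_body, negative_body).
# Example program:  p :- not q.   q :- not p.   r :- p.
rules = [
    ("p", [], ["q"]),
    ("q", [], ["p"]),
    ("r", ["p"], []),
]

atoms = {a for h, pos, neg in rules for a in [h, *pos, *neg]}

def reduct(rules, candidate):
    """Gelfond-Lifschitz reduct: drop rules whose negative body intersects
    the candidate set; strip negative literals from the remaining rules."""
    return [(h, pos) for h, pos, neg in rules if not (set(neg) & candidate)]

def minimal_model(positive_rules):
    """Least model of a negation-free program via naive fixpoint iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for h, pos in positive_rules:
            if set(pos) <= model and h not in model:
                model.add(h)
                changed = True
    return model

def stable_models(rules, atoms):
    """Enumerate all candidate atom sets; keep those equal to the least
    model of their own reduct (brute force -- fine for toy programs only)."""
    subsets = chain.from_iterable(combinations(sorted(atoms), r)
                                  for r in range(len(atoms) + 1))
    return [set(s) for s in subsets
            if minimal_model(reduct(rules, set(s))) == set(s)]

print(stable_models(rules, atoms))  # the two stable models: {'q'} and {'p', 'r'}
```

For the three-rule program above this yields the two stable models {q} and {p, r}; the brute-force enumeration is exponential in the number of atoms, which is precisely why efficient evaluation techniques matter.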

CV:

Francesco Ricca (www.mat.unical.it/ricca) is currently an Associate Professor at the Department of Mathematics and Computer Science of the University of Calabria, Italy. In the same Department he is Coordinator of the Computer Science Courses Council.
He received his Laurea Degree in Computer Science Engineering (2002) and a PhD in Computer Science and Mathematics (2006) from the University of Calabria, Italy, and received the Habilitation for Full Professor in Computer Science (INF/01) in 2017.
He is interested in declarative logic-based languages, consistent query answering, and rule-based reasoning on ontologies, and in particular in the issues concerning their practical application: system design and implementation, and development tools.
He is co-author of more than 100 peer-reviewed publications, including 30+ articles in international research journals, encyclopedia chapters, and papers in conference and workshop proceedings of national and international importance. He has served on the program committees of international conferences and workshops such as IJCAI, AAAI, KR, ICLP, LPNMR, and JELIA, and has reviewed for journals including AIJ, JAIR, TPLP, and JLC. He is Area Editor of the Association for Logic Programming newsletter and a member of the Executive Board of the Italian Association for Artificial Intelligence.

Posted in TEWI-Kolloquium | Comments disabled for A Survey of Evaluation Techniques and Systems for Answer Set Programming

Artificial Intelligence (AI) in media applications and services

Dr.-Ing. Christian Keimel | 9.5.2019 | 10:00 | S.1.42

Abstract: Artificial Intelligence (AI) is nowadays used frequently in many application domains. Although sometimes treated as an afterthought in the public discussion compared to domains such as health, transportation, and manufacturing, the media domain is also being transformed by AI, which enables new opportunities ranging from content creation (e.g., "robojournalism") and individualised content to the optimisation of content production and distribution. Underlying many of these opportunities is the use of AI, in its current incarnation as deep learning, for understanding audio-visual content by extracting structured information from unstructured data, i.e., the audio-visual content itself.

In this talk, the current understanding and trends of AI will therefore be discussed: what can be done, what is being done, and what challenges remain in the use of AI, especially in the context of media applications and services. The talk focuses less on the details and fundamentals of deep learning and more on a practical perspective: how recent advances in this field can be utilised in use cases in the media domain, especially with respect to audio-visual content and broadcasting.
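As a toy illustration of extracting structured information from unstructured audio-visual data, a classic pre-deep-learning baseline segments a video into shots by thresholding histogram differences between consecutive frames (the data and threshold below are invented for illustration, not taken from the talk):

```python
def shot_boundaries(histograms, threshold=0.5):
    """Return frame indices where the L1 distance between consecutive
    normalized color histograms exceeds the threshold -- a classic
    baseline for structuring raw video into shots."""
    cuts = []
    for i in range(1, len(histograms)):
        dist = sum(abs(a - b) for a, b in zip(histograms[i - 1], histograms[i]))
        if dist > threshold:
            cuts.append(i)
    return cuts

# Synthetic 4-bin histograms: frames 0-2 look alike, frame 3 starts a new shot.
frames = [[0.70, 0.10, 0.10, 0.10],
          [0.68, 0.12, 0.10, 0.10],
          [0.70, 0.10, 0.12, 0.08],
          [0.10, 0.10, 0.10, 0.70]]
print(shot_boundaries(frames))  # [3]
```

Modern deep-learning pipelines replace the hand-crafted histogram feature with learned representations, but the output, structured metadata describing unstructured content, is of the same kind.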

Bio: Christian Keimel received his B.Sc. and Dipl.-Ing. (Univ.) in information technology from the Technical University of Munich (TUM) in 2005 and 2007, respectively. In 2014 he received a Dr.-Ing. degree from TUM for his dissertation on the "Design of video quality metrics with multi-way data analysis". Since 2013 he has been with the Institut für Rundfunktechnik (IRT), the research and competence centre of the public service broadcasters of Austria, Germany, and Switzerland, where he leads the machine learning team working on applications of machine learning and AI in the broadcasting context. In addition, he is a lecturer at TUM for "Deep Learning for Multimedia". His current research interests include applications of data-driven models using machine learning, particularly deep learning, for audio-visual content understanding and distribution optimisation.


Towards 6DoF Adaptive Streaming Through Point Cloud Compression

Jeroen van der Hooft | 25.03.2019 | 16:00 | S.2.42

Abstract: The increasing popularity of head-mounted devices and 360-degree video cameras allows content providers to offer virtual reality video streaming over the Internet, using a suitable representation of the immersive content combined with traditional streaming techniques. While this approach allows the user to look around and move in three dimensions, the user's location is fixed by the camera's position within the scene. Recently, increased interest has been shown in free movement within immersive scenes, referred to as six degrees of freedom (6DoF). One way to realize this is by capturing one or multiple objects through a number of cameras positioned at different angles, creating a point cloud object which consists of the location and RGB color of a significant number of points in three-dimensional space. While the concept of point clouds has been around for over two decades, it recently received increased attention from MPEG, which issued a call for proposals for point cloud compression. As a result, dynamic point cloud objects can now be compressed to bit rates in the order of 3 to 55 Mb/s, allowing feasible delivery over today's mobile networks. In this talk, we use MPEG's dataset to generate different scenes consisting of multiple point cloud objects, and propose a number of rate adaptation heuristics which use information on the user's position and focus, the available bandwidth, and the buffer status to decide upon the most appropriate quality representation of each of the considered objects. Through an extensive evaluation, we discuss the advantages and drawbacks of each solution. We argue that the optimal solution depends on the considered scene and camera path, which opens interesting possibilities for future work.
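The kind of rate adaptation heuristic described above can be sketched as a greedy allocator. This is a simplified illustration under assumed visibility/distance inputs, not one of the heuristics actually evaluated in the talk:

```python
def select_qualities(objects, bandwidth_budget, bitrates):
    """Greedy rate adaptation sketch: start every object at the lowest
    quality, then repeatedly upgrade the object with the best
    utility-per-extra-bit (visibility weight / distance) that still
    fits within the bandwidth budget."""
    quality = {name: 0 for name in objects}
    spent = sum(bitrates[0] for _ in objects)
    while True:
        best, best_score = None, 0.0
        for name, (visible, distance) in objects.items():
            q = quality[name]
            if q + 1 >= len(bitrates):
                continue                       # already at top quality
            extra = bitrates[q + 1] - bitrates[q]
            score = (visible / distance) / extra
            if spent + extra <= bandwidth_budget and score > best_score:
                best, best_score = name, score
        if best is None:
            return quality
        spent += bitrates[quality[best] + 1] - bitrates[quality[best]]
        quality[best] += 1

# Two objects: one in focus and close, one barely visible and far away.
objects = {"dancer": (1.0, 1.0), "bystander": (0.2, 4.0)}
bitrates = [3, 10, 25, 55]   # Mb/s per quality level (the talk's 3-55 Mb/s range)
print(select_qualities(objects, bandwidth_budget=40, bitrates=bitrates))
# {'dancer': 2, 'bystander': 1}
```

A real client would refresh this decision continuously as the viewport and throughput estimates change; the greedy utility-per-bit rule is just one defensible choice among the heuristics the abstract alludes to.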

Bio: Jeroen van der Hooft obtained his M.Sc. degree in Computer Science Engineering from Ghent University, Belgium, in July 2014. In August of that year, he joined the Department of Information Technology at the same university, where he is currently active as a Ph.D. student. His main research interests include end-to-end Quality of Experience optimization in adaptive video streaming and low-latency delivery of immersive video content. During the first months of 2019, he worked as a visiting researcher at the Institute of Information Technology at the University of Klagenfurt, where he focused on rate adaptation for volumetric media streaming.

Website: https://users.ugent.be/~jvdrhoof/


Knitting together/Living together: What we can learn from knitting with robots

Dr. Pat Treusch | Thu, April 11, 2019 | 18:00 | Stiftungssaal

Content (draft): In her talk "Knitting together/Living together: What we can learn from knitting with robots", Patricia Treusch speaks about the collaboration of humans and machines. Building on her current research, she examines human-machine relations, the automation of labour, and the body/mind split in the context of Artificial Intelligence. Using knitting as an example, she discusses the interaction between humans and robots and presents forms of feminist-critical intervention in current practices of engineering and robotics.


Dr. phil./PhD Pat Treusch completed a binational doctorate (cotutelle procedure) on the topic of "Robotic Companionship" at the Center for Interdisciplinary Women's and Gender Studies (ZIFG), TU Berlin, and at Tema Genus, Linköping University, Sweden. From August 2015 to February 2018, she was a research associate at the ZIFG, where she ran the project laboratory "Wie Wissenschaft Wissen schafft. Verantwortlich Handeln in Natur- und Technikwissenschaften" ("How science creates knowledge: responsible action in the natural and engineering sciences") within the MINTgrün orientation study programme (TUB).

Within the Berlin joint programme "DiGiTal – Digitalisierung: Gestaltung und Transformation" (Digitalisation: Design and Transformation), Pat Treusch is carrying out her postdoc project "Das vernetzte Selbst" ("The Networked Self: a feminist-interdisciplinary study of how digitalisation processes change learning cultures in the age of the Internet of Things (IoT)") at the Chair of General and Historical Educational Science and at the ZIFG, TU Berlin. The project analyses empirically observable challenges to "our" learning cultures that arise when everyday technologies begin to learn. Smart home devices are just one current example of such intelligent everyday IoT technologies, at which novel human-machine interfaces emerge. At their core, these interfaces promise to network all areas of life. The project assumes that the emerging interfaces have an inherent quality that challenges "us" to do more than develop a "media literacy 4.0". Situated between feminist science and technology studies, with its focus on human-machine relations, and feminist educational science, with its focus on theories of learning, the project explores the extent to which current digital learning environments are characterised by new entanglements of machine and human learning. This also means tracing how cognition and learning, and in particular computers and cognition, are related in the various fields of knowledge and technology shaped by digitalisation. Accordingly, the project aims to capture the changing, digitalised conditions of "our" relation to self and world. Not least, this includes asking whether and how intelligent everyday technologies renegotiate (or could renegotiate) fundamental symbolic ordering schemes of society, such as gender, sexuality, race, class, or ableism.


Review: Developing and Evolving a DSL-Based Approach for Runtime Monitoring of Systems of Systems [Slides]

The review of the TEWI colloquium by Priv.-Doz. Dr. Rick Rabiser from February 7, 2019 comprises the slides (below):



Review: Random Matrix Theory in Array Signal Processing: Application Examples [Slides]

The review of the TEWI colloquium by Prof. Xavier Mestre from February 25, 2019 comprises the slides (below):



Forgetful, shortsighted demons in wireless communications (in cooperation with Lakeside Labs GmbH)

Harun Siljak, PhD | February 26, 2019 | 15:30 | B04.1.114 (Lakeside B04, entrance b, 1st floor)

Abstract:

The common theme of the results presented in this talk is the control of complex systems in wireless communications subject to information loss, either because of noise and equipment limitations or because of the controller's inability to wait long enough or see far enough. Can we reconstruct the past and/or predict the future based on imperfect information, and why would we want to do that in the first place?
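One classical answer to the question of reconstructing state from imperfect information is filtering. The following minimal scalar Kalman filter is purely illustrative and not taken from the talk:

```python
import random

def kalman_1d(observations, q=0.01, r=1.0):
    """Scalar Kalman filter: estimate a slowly drifting state from noisy
    measurements (process variance q, measurement variance r)."""
    x, p = observations[0], 1.0   # initial state estimate and its variance
    estimates = [x]
    for z in observations[1:]:
        p += q                    # predict: uncertainty grows over time
        k = p / (p + r)           # Kalman gain: trust in the new measurement
        x += k * (z - x)          # update estimate toward the measurement
        p *= (1 - k)              # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

random.seed(0)
true_value = 5.0
noisy = [true_value + random.gauss(0, 1.0) for _ in range(200)]
est = kalman_1d(noisy)
print(round(est[-1], 2))  # steady-state estimate of the true value 5.0
```

The filter trades off the model's prediction against each noisy measurement via the gain k; more elaborate versions of this idea underpin state reconstruction throughout control and communications.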

Bio:

Harun Siljak obtained his BoE and MoE degrees in control engineering from the University of Sarajevo in 2010 and 2012, respectively, and his PhD in electrical engineering from International Burch University Sarajevo in 2015. After working at International Burch University and Bell Labs Ireland, he joined Trinity College Dublin as an EDGE Marie Curie Fellow in 2017 to work on his project on complexity and control in distributed massive MIMO. His research interests include the physics of computation, reversibility, wave propagation, and nonlinear dynamics. His other interests include popular science and science fiction writing, as well as collaborations with artists and writers.


Random Matrix Theory in Array Signal Processing: Application Examples

Prof. Xavier Mestre | February 25, 2019 | 11:00 | S.1.42

Abstract:

Conventional tools in array signal processing have traditionally relied on the availability of a large number of samples acquired at each sensor or array element (antenna, hydrophone, microphone, etc.). Large sample size assumptions typically guarantee the consistency of estimators, detectors, classifiers, and multiple other widely used signal processing procedures. However, practical scenario conditions and array mobility, together with the need for low latency and reduced scanning times, impose strong limits on the total number of observations that can be effectively processed. When the number of collected samples per sensor is small, conventional large-sample asymptotic approaches are no longer relevant. Recently, large random matrix theory tools have been proposed in order to address the small sample support problem in array signal processing. In fact, it has been shown that the most important and longstanding problems in this field can be reformulated and studied according to this asymptotic paradigm. By exploiting the latest advances in large random matrix theory and high-dimensional statistics, a novel and unconventional methodology can be established, which provides an unprecedented treatment of the finite sample-per-sensor regime. In this talk, we will see that random matrix theory establishes a unifying framework for the study of array signal processing techniques under the constraint of a small number of observations per sensor, which has radically changed the way in which array processing methodologies have traditionally been established. We will show how this unconventional way of revisiting classical array processing has led to major advances in the design and analysis of signal processing techniques for multidimensional observations.
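The small-sample effect motivating this line of work can be demonstrated in a few lines (an illustrative sketch, not material from the talk): even when the true covariance is the identity, so every true eigenvalue equals 1, the largest eigenvalue of the sample covariance matrix lies near the Marchenko-Pastur edge (1 + sqrt(M/n))^2 when the number of snapshots n is comparable to the number of sensors M.

```python
import math
import random

def sample_covariance(samples):
    """M x M sample covariance of n zero-mean M-dimensional snapshots."""
    n, m = len(samples), len(samples[0])
    return [[sum(s[i] * s[j] for s in samples) / n for j in range(m)]
            for i in range(m)]

def largest_eigenvalue(a, iters=500):
    """Top eigenvalue of a symmetric positive semidefinite matrix,
    computed by plain power iteration."""
    m = len(a)
    v = [1.0] * m
    lam = 0.0
    for _ in range(iters):
        w = [sum(a[i][j] * v[j] for j in range(m)) for i in range(m)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

random.seed(1)
m, n = 40, 80   # M = 40 sensors, n = 80 snapshots: far from the "large n" regime
samples = [[random.gauss(0, 1) for _ in range(m)] for _ in range(n)]
top = largest_eigenvalue(sample_covariance(samples))
edge = (1 + math.sqrt(m / n)) ** 2   # Marchenko-Pastur upper edge, about 2.91
print(round(top, 2), "vs. true eigenvalue 1.0")
```

Classical large-sample reasoning would predict a top eigenvalue near 1; the random-matrix prediction (the Marchenko-Pastur edge) is the one the experiment actually matches, which is exactly why the finite sample-per-sensor regime needs its own asymptotic theory.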

Bio:

Xavier Mestre received the MS and PhD degrees in Electrical Engineering from the Technical University of Catalonia (UPC) in 1997 and 2002, respectively, and the Licentiate Degree in Mathematics in 2011. During the pursuit of his PhD, he held a 1998-2001 PhD scholarship granted by the Catalan Government and was awarded the 2002 Rosina Ribalta second prize for the best doctoral thesis project in the areas of Information Technologies and Communications by the Epson Iberica Foundation. From January 1998 to December 2002, he was with UPC's Communications Signal Processing Group, where he worked as a Research Assistant and participated actively in several European-funded projects. In January 2003 he joined the Telecommunications Technological Center of Catalonia (CTTC), where he currently holds a position as a Senior Research Associate and head of the Advanced Signal and Information Processing Department. During this time, he has actively participated in 8 European projects and two ESA contracts. He has been coordinator of the European ICT project EMPhAtiC (2012-15) and has participated in 6 industrial contracts, some of which have led to commercialized products. He is author of three granted patents, 9 book chapters, 41 international journal papers, and more than 90 articles in international conferences. He has been associate editor of the IEEE Transactions on Signal Processing (2008-11, 2015-present) and associate co-editor of the special issue on Cooperative Communications in Wireless Networks of the EURASIP Journal on Wireless Communications and Networking. He is an IEEE Senior Member and an elected member of the IEEE Sensor Array and Multichannel Signal Processing technical committee (2013-2018) and of the EURASIP Special Area Teams on "Theoretical and Methodological Trends in Signal Processing" (2015-present) and "Signal Processing in Communications" (2018-present).
He has participated in the organization of multiple conferences and scientific events, such as the IEEE Wireless Communications and Networking Conference 2018 (general vice-chair), the IEEE International Symposium on Power Line Communications (technical chair), European Wireless 2014 (general co-chair), the European Signal Processing Conference 2011 (general technical chair), the IEEE Winter School on Information Theory 2011 (general co-chair), and the Summer School on Random Matrix Theory for Wireless Communications 2006 (general chair). He is general chair of the IEEE International Conference on Acoustics, Speech and Signal Processing 2020.



Developing and Evolving a DSL-Based Approach for Runtime Monitoring of Systems of Systems

Priv.-Doz. Dr. Rick Rabiser | February 7, 2019 | 10:00 | S.2.42

Abstract

Complex software-intensive systems are often described as systems of systems (SoS) due to their heterogeneous architectural elements. As SoS behavior is often only understandable during operation, runtime monitoring is needed to detect deviations from requirements. Today, while diverse monitoring approaches exist, most do not provide what is needed to monitor SoS, e.g., support for dynamically defining and deploying diverse checks across multiple systems. In this talk, I will describe our experiences of developing, applying, and evolving an approach for monitoring an SoS in the domain of industrial automation software that is based on a domain-specific language (DSL). I will first describe our initial approach to dynamically define and check constraints in SoS at runtime, including a demo of our monitoring tool REMINDS, and then motivate and describe its evolution based on requirements elicited in an industry collaboration project. I will furthermore describe solutions we have developed to support the evolution of our approach, i.e., a code generation approach and a framework to automate testing the DSL after changes. We evaluated the expressiveness and scalability of our new DSL-based approach using an industrial SoS. At the end of the talk, I will also present general lessons we learned and give an overview of other projects I am currently involved in, both in the area of software monitoring and in other areas such as software product lines.
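A tiny example conveys the flavor of checking DSL-defined constraints over runtime events. The constraint syntax below is invented for illustration; it is not the actual REMINDS DSL:

```python
import re

# A toy constraint in the spirit of a runtime-monitoring DSL:
# "after A expect B within N" -- event B must occur within N events after A.
CONSTRAINT = re.compile(r"after (\w+) expect (\w+) within (\d+)")

def check(constraint, events):
    """Return the indices of events that violate the given constraint."""
    a, b, n = CONSTRAINT.match(constraint).groups()
    n = int(n)
    violations = []
    for i, e in enumerate(events):
        # An occurrence of A violates the constraint if no B follows
        # within the next n events.
        if e == a and b not in events[i + 1:i + 1 + n]:
            violations.append(i)
    return violations

events = ["start", "req", "ack", "req", "idle", "idle", "req", "ack"]
print(check("after req expect ack within 2", events))  # [3]
```

A production DSL would of course support far richer temporal and data constraints, as well as dynamic definition and deployment of checks across systems; the point here is only the pattern of compiling a textual constraint and evaluating it against a stream of runtime events.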

Bio

Rick Rabiser (https://mevss.jku.at/rabiser) is currently a senior researcher at the Christian Doppler Laboratory for Monitoring and Evolution of Very-Large-Scale Software Systems (VLSS) at Johannes Kepler University Linz, Austria. In this lab, he heads the research module on requirements-based monitoring and diagnosis in VLSS evolution, with Primetals Technologies Austria as industry partner. He holds a Master's and a Ph.D. degree in Business Informatics as well as the venia docendi (Habilitation) in Practical Computer Science from Johannes Kepler University Linz. His research interests include, but are not limited to, variability management, software maintenance and evolution, systems and software product lines, automated software engineering, requirements engineering, requirements monitoring, and usability and user interface design. Dr. Rabiser co-authored over 120 peer-reviewed publications; served on 80+ program committees and 25+ conference and workshop organization committees; and frequently reviews articles for several international journals such as IEEE TSE, IEEE TSC, ACM CSUR, EMSE, JSS, and IST. He is a member of the steering committee of the Euromicro SEAA conference series, a member of the Euromicro Board of Directors (Director for Austria) and the Euromicro Executive Office (Publicity Secretary), and an elected member of the steering committee of the International Systems and Software Product Line Conference (SPLC). He is currently the speaker of the computer scientists at JKU Linz who are not full professors (Fachbereichssprecher Mittelbau Informatik).


Effective model-based approaches for automated software testing

Prof. Giorgio Brajnik | January 23, 2019 | 11:00 | N.1.42 (Germanistik)

Abstract

Testing lies at the heart of software development. Tightly woven with requirements engineering, the testing process influences how software is developed and its quality. With the adoption of agile and DevOps approaches, continuous testing has to rely on a multi-level testing strategy that balances test automation and exploratory testing. Because so many things need to be tested, and because the system under test changes often and rapidly, the effectiveness and sustainability of the testing process is a must.

I will present an approach for automating end-to-end testing that is based on UML specifications of the behavior of the system and a toolkit that automatically generates source code supporting the definition of high-level test cases and related artifacts. In this way, a software development team can avoid dealing with low-level details and focus instead on what needs to be tested, which test conditions need to be covered, and how test results affect requirements coverage. This information then constitutes living documentation of the system specification, which can be used to guide exploratory testing. The approach is currently being used in mobile apps (in the area of workforce management) and web apps (in the financial domain).
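The generation step can be sketched as follows: from a behavioral model standing in for the UML specifications (all state and action names are illustrative, not from the talk's toolkit), derive one test path per transition via breadth-first search, achieving transition coverage with the shortest possible action sequences.

```python
from collections import deque

# A toy behavioral model: (state, action) -> next state.
transitions = {
    ("logged_out", "login"): "logged_in",
    ("logged_in", "open_report"): "viewing",
    ("viewing", "close"): "logged_in",
    ("logged_in", "logout"): "logged_out",
}

def generate_tests(transitions, start):
    """Generate one test (an action sequence) per transition: the shortest
    path from the start state to the transition's source, plus the action."""
    def shortest_path(target_state):
        # BFS over the state graph from the start state.
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, path = queue.popleft()
            if state == target_state:
                return path
            for (s, action), nxt in transitions.items():
                if s == state and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [action]))
        return None

    return [shortest_path(s) + [action] for (s, action) in transitions]

for t in generate_tests(transitions, "logged_out"):
    print(t)
# ['login']
# ['login', 'open_report']
# ['login', 'open_report', 'close']
# ['login', 'logout']
```

Each generated action sequence maps directly to a high-level test case, and the set of sequences documents which transitions of the specification are covered, which is the "living documentation" idea in miniature.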

Bio

Giorgio Brajnik is an associate professor at the Computer Science Department of the University of Udine, Italy. He holds a degree in Computer Science from the University of Udine and a PhD in Computer Science from the University of Manchester. After working on information search systems, since 1999 his focus has been on methods for the effective assessment of the accessibility and quality of websites and web applications and, more recently, on model-based techniques for the analysis of user interfaces.

At the university he teaches courses on object-oriented programming and on accessibility and user-centered web development. In 1992 and 1995-96 he was a visiting scholar at the University of Texas at Austin. He has been an invited lecturer, panelist, and visiting professor in Europe, the U.S., and New Zealand. He participated in several of the W3C working groups dealing with accessibility. He also supervised the development of accessibility testing tools while working with a company he co-founded, Usablenet Inc. Currently he is a scientific advisor for Interaction Design Solutions, a startup company he co-founded that specializes in model-driven techniques for software system testing.

He is a program committee member of several conferences, including the International Cross-Disciplinary Conference on Web Accessibility and ACM ASSETS, for which he was co-chair of the Doctoral Consortium and also General Chair, and a regular reviewer for several journals. Additional details are available at www.dimi.uniud.it/giorgio/vitae.html.
