Sheffield Machine Learning Seminar Series

Upcoming Seminars

  1. Date: 18 December 2017 (Monday); Time: 4:30pm-5:30pm; Venue: Ada Lovelace (Regent Court COM-108);
    Title: Neuroinformatics of learning, memory and decision making: from model-based analyses to individualized cognitive neurotherapeutics;
    Speaker: Dr Gedi Luksys, University of Edinburgh.
    Host: Eleni Vasilaki

    Abstract
    How we learn, recall our memories, and use them for making decisions depends on our genes as well as on environmental modulators, such as stress, emotion and uncertainty. Cognitive performance is the outcome of several neurobiologically distinct mental processes, some of which are not easily amenable to direct observation. Their roles and interactions can, however, be dissociated with computational models. Using examples from animal learning under stress and imaging genetics of human memory, I will show how computational models can be used to discover neural and genetic correlates of cognitive phenomena, and suggest their computational explanations. I will also show how databases and multi-voxel pattern analyses can be used to inform and improve neuroimaging-based correlates of learning and memory, and discuss the ecological relevance of human experimental setups, using schema-based learning and decision making as an example. Finally, I will propose applications of model-based analyses and future directions in cognitive neuroinformatics, such as automated characterization of individual decision making profiles from big data and individualized cognitive neurotherapeutics.
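
    The model-based analyses mentioned above typically fit a simple learning model to behavioural data and then relate its fitted parameters or internal variables to neural or genetic measures. As a hedged illustration only (the task, parameter names and data below are invented, not the speaker's models), a minimal Python sketch of fitting a delta-rule learner to two-armed bandit choices by maximum likelihood:

      # Minimal sketch of a model-based analysis: fit a delta-rule learning model
      # with a softmax choice rule to simulated two-armed bandit choices by maximum
      # likelihood. Task, parameters and data are hypothetical illustrations.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)

      def simulate(alpha=0.3, beta=3.0, n_trials=200, p_reward=(0.8, 0.3)):
          """Simulate choices and rewards from a delta-rule learner on a 2-armed bandit."""
          q = np.zeros(2)
          choices, rewards = [], []
          for _ in range(n_trials):
              p = np.exp(beta * q) / np.exp(beta * q).sum()   # softmax choice rule
              c = rng.choice(2, p=p)
              r = float(rng.random() < p_reward[c])
              q[c] += alpha * (r - q[c])                      # delta-rule value update
              choices.append(c); rewards.append(r)
          return np.array(choices), np.array(rewards)

      def neg_log_lik(params, choices, rewards):
          """Negative log-likelihood of the observed choices under (alpha, beta)."""
          alpha, beta = params
          q, nll = np.zeros(2), 0.0
          for c, r in zip(choices, rewards):
              p = np.exp(beta * q) / np.exp(beta * q).sum()
              nll -= np.log(p[c] + 1e-12)
              q[c] += alpha * (r - q[c])
          return nll

      choices, rewards = simulate()
      fit = minimize(neg_log_lik, x0=[0.5, 1.0], args=(choices, rewards),
                     bounds=[(1e-3, 1.0), (1e-3, 20.0)])
      print("estimated learning rate and inverse temperature:", fit.x)

    In a full analysis the fitted parameters, or trial-by-trial model variables such as prediction errors, would then be related to neural or genetic data.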

    Biography
    Gedi Luksys completed his PhD in Computational & Behavioural Neuroscience at EPFL, Switzerland, in 2009, followed by a postdoc at the University of Basel, Switzerland, working on human imaging genetics and model-based analysis of memory. In 2016 he started a lectureship at the University of Edinburgh.

    His research explores several directions in computational cognitive neuroscience, such as modelling animal and human behaviour in various learning, memory and decision making setups, studying the role of schemas in learning and decision making, and employing multi-voxel pattern analysis. He also investigates the computational roles of stress and other modulators, how they interact with individual traits to control behaviour and cognition, and how they can be used to improve cognitive performance and alleviate adverse effects of mental disorders.


 

Past Seminars

  1. Date: 29 March 2017 (Wednesday); Time: 10:00am-11:00am; Venue: Ada Lovelace (Regent Court COM-108);
    Title: Stochastic (Partial) Differential Equations and Gaussian Processes;
    Speaker: Prof. Simo Särkkä, Aalto University, Finland.
    Host: Mauricio A Alvarez Lopez
    Abstract
    Stochastic partial differential equations and stochastic differential equations can be seen as alternatives to kernels for representing Gaussian processes in machine learning and inverse problems. Linear operator equations correspond to spatial kernels, and temporal kernels are equivalent to linear Itô stochastic differential equations. The differential equation representations allow the use of numerical methods for differential equations on Gaussian processes; for example, finite differences, finite elements, basis function methods, and Galerkin methods can be used. In the temporal and spatio-temporal cases we can use linear-time Kalman filter and smoother approaches.
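
    The kernel/SDE equivalence can be made concrete with a small sketch: the Matérn-3/2 covariance corresponds to a two-dimensional linear SDE, so temporal GP regression can be run as a linear-time Kalman filter. The hyperparameters and data below are illustrative assumptions, not material from the talk.

      # Sketch: GP regression with a Matérn-3/2 kernel carried out through its
      # equivalent linear SDE and a linear-time Kalman filter (the kernel/SDE
      # correspondence discussed in the talk). Hyperparameters and data are
      # illustrative.
      import numpy as np
      from scipy.linalg import expm

      sigma2, ell, noise = 1.0, 0.5, 0.1**2        # kernel variance, lengthscale, obs noise
      lam = np.sqrt(3.0) / ell

      # State-space form of the Matérn-3/2 covariance function.
      F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])
      H = np.array([[1.0, 0.0]])
      Pinf = np.diag([sigma2, lam**2 * sigma2])    # stationary state covariance

      t = np.sort(np.random.rand(100)) * 10.0
      y = np.sin(t) + np.sqrt(noise) * np.random.randn(len(t))

      m, P, means = np.zeros((2, 1)), Pinf.copy(), []
      for k in range(len(t)):
          if k > 0:                                # predict forward by the time gap
              A = expm(F * (t[k] - t[k - 1]))
              Q = Pinf - A @ Pinf @ A.T
              m, P = A @ m, A @ P @ A.T + Q
          S = H @ P @ H.T + noise                  # Kalman update with y[k]
          K = P @ H.T / S
          m = m + K * (y[k] - H @ m)
          P = P - K @ S @ K.T
          means.append(m[0, 0])

      print("filtered GP mean at the last time point:", means[-1])
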
    Biography
    Prof. Simo Särkkä received his Master of Science (Tech.) degree (with distinction) in engineering physics and mathematics, and Doctor of Science (Tech.) degree (with distinction) in electrical and communications engineering from Helsinki University of Technology, Espoo, Finland, in 2000 and 2006, respectively. From 2000 to 2010 he worked with Nokia Ltd., Indagon Ltd., and Nalco Company in various industrial research projects related to telecommunications, positioning systems, and industrial process control. From 2010 to 2013 he worked as a Senior Researcher with the Department of Biomedical Engineering and Computational Science (BECS) at Aalto University, Finland.

    Currently, Dr. Särkkä is an Associate Professor and Academy Research Fellow with Aalto University, Technical Advisor and Director of IndoorAtlas Ltd., and an Adjunct Professor with Lappeenranta University of Technology. In 2013 he was a Visiting Professor with the Department of Statistics of Oxford University and in 2011 he was a Visiting Scholar with the Department of Engineering at the University of Cambridge, UK. His research interests are in multi-sensor data processing systems with applications in location sensing, health technology, machine learning, inverse problems, and brain imaging. He has authored or coauthored ~80 peer-reviewed scientific articles and has 3 granted patents. His first book, "Bayesian Filtering and Smoothing", and its Chinese translation were recently published by Cambridge University Press. He is a Senior Member of the IEEE and has been serving as an Associate Editor of IEEE Signal Processing Letters since August 2015.

  2. Date: 5 April 2017 (Wednesday); Time: 3:00pm-4:00pm; Venue: Ada Lovelace (Regent Court COM-108);
    Title: On the (statistical) detection of adversarial examples;
    Speaker: Kathrin Grosse, Saarland University, Germany.
    Host: Mike Smith
    Abstract
    Imagine meeting a dear friend and thinking he is your mother, because he is wearing some glasses. Wait, what? Out of the question for most of us, yet a reality for many machine learning models. So-called adversarial examples are original samples for which an adversary computes an optimal perturbation, leading to a different classification. To humans, however, the two examples are in most cases indistinguishable. The automated detection of such adversarial examples remains an open problem, since the perturbations are a consequence of an inherent property of all classifiers: the gradient of the decision function.

    In this talk, we will first briefly review how adversarial examples are computed (using the example of malware data). We then move to our work on detecting adversarial examples, presenting two approaches. One confidently detects adversarial examples, but only when they are presented in a batch. The other works for single examples as well, yet it does not serve as a reliable defence in all cases: in some cases, it only increases the cost of the attack.
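
    As a rough illustration of how such gradient-based perturbations are computed, the sketch below applies the fast gradient sign method to a toy logistic-regression model; it is a generic example, not the attacks or detectors from the talk.

      # Sketch: computing an adversarial example for a tiny logistic-regression
      # classifier with the fast gradient sign method (FGSM). A generic
      # illustration of gradient-based perturbations.
      import numpy as np

      rng = np.random.default_rng(0)

      # Train a small logistic-regression model on synthetic two-class data.
      X = rng.normal(size=(200, 10))
      w_true = rng.normal(size=10)
      y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)

      w = np.zeros(10)
      for _ in range(500):                          # plain gradient descent
          p = 1.0 / (1.0 + np.exp(-(X @ w)))
          w -= 0.1 * X.T @ (p - y) / len(y)

      def fgsm(x, label, eps=0.2):
          """Perturb x by eps in the direction that increases the loss for `label`."""
          p = 1.0 / (1.0 + np.exp(-(x @ w)))
          grad = (p - label) * w                    # gradient of cross-entropy w.r.t. x
          return x + eps * np.sign(grad)

      x, x_adv = X[0], fgsm(X[0], y[0])
      predict = lambda v: 1.0 / (1.0 + np.exp(-(v @ w))) > 0.5
      print("clean prediction:", predict(x), " adversarial prediction:", predict(x_adv))
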
    Biography
    I studied cognitive sciences at Osnabrück (Lower Saxony, Germany), specializing in computer science, AI and neurobiology. During my term abroad at the Universidad del Sur in Bahia Blanca, Argentina, we (in joint work with Carlos Chesñevar) started to work on opinion mining on Twitter. Around then I decided to work in Data Mining and ML, and continued my studies at Saarland University (since there are many Data Mining/ML specialists there). I wrote my master's thesis on text mining in Jilles Vreeken's group, which does Minimum Description Length based exploratory data analysis. While I have always enjoyed data mining and machine learning in themselves, I became interested in security, particularly the security of ML. Due to this interest, I started working (as a PhD student) on this topic in Michael Backes' group at CISPA.

  3. Date: 26 April 2017 (Wednesday); Time: 3:00pm-4:00pm; Venue: Ada Lovelace (Regent Court COM-108);
    Title: Computational challenges in genomics and personalized medicine;
    Speaker: Dr Dennis Wang, University of Sheffield.
    Host: Haiping Lu
    Abstract
    Recent technological advances in the high-throughput profiling of DNA, RNA and proteins in human tissue have led to a wealth of genomic data and a better understanding of complex diseases, such as cancer, diabetes and neuro-degeneration. This, however, has also led to two major computational challenges: classification and feature selection. In this talk, I will highlight examples where supervised and unsupervised clustering approaches have been applied by drug developers aiming to produce "personalized" medicines from genomic data. I will also point out further machine learning problems and areas for collaboration with Sheffield's medical and biological research communities, who are dealing with increasing amounts of genomic data.
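
    As a hedged, toy-scale illustration of the unsupervised side of this (synthetic data and arbitrary dimensions, not the speaker's pipeline), patients can be clustered by expression profiles with k-means:

      # Sketch: unsupervised clustering of patients by synthetic expression
      # profiles, as a minimal stand-in for the clustering approaches mentioned
      # in the abstract (data and dimensions are invented).
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)

      # Synthetic expression matrix: 100 patients x 500 genes, two hidden subtypes.
      expr = rng.normal(size=(100, 500))
      expr[:50, :20] += 2.0                          # subtype-specific signature

      X = StandardScaler().fit_transform(expr)       # standardise each gene
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      print("patients per putative subtype:", np.bincount(labels))
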
    Biography
    I graduated from the University of British Columbia (Vancouver, Canada) and completed my undergrad dissertation at the European Bioinformatics Institute (Cambridge, UK). I then moved to the University of Cambridge for an MPhil working on boolean networks with Dr. Jasmin Fisher and Dr. Andrew Phillips (Microsoft Research), and a PhD working on statistical genetics with Prof Lorenz Wernisch and Prof Willem Ouwehand (MRC Biostatistics Unit).

    Following the completion of my PhD in 2012, I undertook postdoctoral training to build a genomics core and identify biomarkers at the Princess Margaret Cancer Centre (Toronto, Canada) with Prof Ming-Sound Tsao and Prof Frances Shepherd. I was promoted to a staff scientist in 2013 to coordinate the genomic profiling and bioinformatics analysis of patient tumors. With a greater interest in drug development, I went back to Cambridge in 2014 to join the early drug discovery division of AstraZeneca where I developed computational methods to identify pharmacological biomarkers that predict drug response. I joined the University of Sheffield in 2016 as a lecturer to further establish genomics and bioinformatics as cornerstones within the education and research programmes at the medical school.

  4. Date: 03 May 2017 (Wednesday); Time: 3:00pm-4:00pm; Venue: Ada Lovelace (Regent Court COM-108);
    Title: RSS-based Indoor Localization using Gaussian Processes;
    Speaker: Dr Roland Hostettler, Aalto University, Finland.
    Host: Mauricio A Alvarez Lopez
    Abstract
    Location-based services such as augmented reality benefit greatly from accurate position estimates. In outdoor environments, global navigation satellite systems often offer adequate performance for a broad range of applications. However, the performance of these systems degrades quickly when moving indoors, due to complex building structures affecting the satellite signals. Additionally, in indoor environments, an error of only a few meters can be the difference between two rooms or even floors. In this talk, we will discuss an alternative localization approach based on fingerprint maps of radio signals. The method uses Gaussian processes to model the radio landscape and can be used with, for example, WiFi or Bluetooth signals.
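
    A minimal sketch of the fingerprint-map idea follows, assuming a plain RBF-kernel GP and synthetic received-signal-strength (RSS) data for a single access point; a full localiser would additionally invert this map to infer position from new RSS readings.

      # Sketch: a Gaussian-process fingerprint map for one access point.
      # RSS measured at known 2-D survey positions is interpolated with GP
      # regression; a localiser would compare new readings against this map.
      # Kernel and noise values are illustrative assumptions.
      import numpy as np

      def rbf(A, B, lengthscale=2.0, variance=25.0):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return variance * np.exp(-0.5 * d2 / lengthscale**2)

      rng = np.random.default_rng(0)
      train_xy = rng.uniform(0, 10, size=(50, 2))                       # survey positions (m)
      rss = -40 - 8 * np.log(1 + np.linalg.norm(train_xy - 5, axis=1))  # synthetic RSS (dBm)
      rss += rng.normal(scale=2.0, size=50)                             # measurement noise

      noise_var = 2.0**2
      K = rbf(train_xy, train_xy) + noise_var * np.eye(50)
      alpha = np.linalg.solve(K, rss - rss.mean())

      test_xy = np.array([[5.0, 5.0], [9.0, 1.0]])
      pred = rbf(test_xy, train_xy) @ alpha + rss.mean()                # GP posterior mean
      print("predicted RSS (dBm) at the test positions:", pred)
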
    Biography
    Roland Hostettler received the Dipl. Ing. degree in electrical and communication engineering from Bern University of Applied Sciences, Switzerland, in 2007, and the M.Sc. degree in electrical engineering and the Ph.D. in automatic control from Luleå University of Technology, Sweden, in 2009 and 2014, respectively. From October 2014 to January 2016, he was a research associate with the Control Engineering Group at Luleå University of Technology. Since February 2016 he has been a postdoctoral researcher at the Department of Electrical Engineering and Automation at Aalto University, Finland.

    His main research interests are statistical signal processing in general and parameter inference, state estimation, and sensor fusion in particular with applications in localization, tracking, as well as activity and health monitoring.

  5. Date: 10 May 2017 (Wednesday); Time: 3:00pm-4:00pm; Venue: Pool Seminar Room G03, 9 Mappin Street;
    Title: Some frontiers in deep reinforcement learning;
    Speaker: Dr Tom Schaul, Google DeepMind.
    Host: Eleni Vasilaki
    Abstract
    Two desiderata for general intelligence are performance and generality. The first requires acting to achieve goals or solve problems; the second asks for agents that are competent on a diversity of tasks, or at least can learn to become so -- with minimal teaching signal, if possible. After decades of focus on performance, recent research has started to emphasize the aspect of generality, building on two key ingredients, namely reinforcement learning (RL) and deep neural networks. I will discuss some of their strengths and weaknesses, put recent breakthroughs into context (e.g. AlphaGo, DQN), and sketch out some ongoing directions that could push generality even further.
    Biography
    Tom Schaul is a senior researcher in reinforcement learning at DeepMind. He did his PhD with Jürgen Schmidhuber at IDSIA and his Postdoc with Yann LeCun at NYU. He has published in many areas of AI, including deep learning, optimization algorithms, artificial curiosity, evolutionary algorithms, and most recently deep and hierarchical RL. He thinks that substantial progress on general AI is possible, and that games are perfect benchmark domains for that.

  6. Date: 07 June 2017 (Wednesday); Time: 3:00pm-4:00pm; Venue: Ada Lovelace (Regent Court COM-108);
    Title: Statistical long-term excitatory and inhibitory synaptic plasticity;
    Speaker: Dr Tim Vogels, University of Oxford.
    Host: Eleni Vasilaki
    Abstract
    Long-term modifications in neuronal connections are critical for reliable memory storage in the brain. However, pre- and postsynaptic components can make synapses highly unreliable. How synaptic plasticity modifies this variability is poorly understood. Here we introduce a theoretical framework in which long-term plasticity performs an optimisation of the postsynaptic response statistics constrained by physiological bounds. In this framework of statistical long-term synaptic plasticity the state of the synapse at the time of plasticity induction determines the ratio of pre- and postsynaptic changes. When applied to plasticity of excitatory synapses, our theory explains the observed diversity in expression loci of individual hippocampal and neocortical potentiation and depression experiments. Moreover, our theory predicts changes at inhibitory synapses that are bounded by the mean excitation, which suggests an efficient excitation-inhibition balance in the brain. Our results propose a principled view of the diversity in expression loci of long-term synaptic plasticity observed in a wide range of slice experiments and reveal a statistically optimal, excitation-inhibition balance in the intact brain.
    Biography
    Tim Vogels studied physics at Technische Universität Berlin and neuroscience at Brandeis University as a Fulbright Scholar. He received his PhD in 2007 in the laboratory of Larry Abbott. After a postdoctoral stay as a Patterson Brain Trust Fellow with Rafa Yuste at Columbia University, he became a Marie Curie Reintegration Fellow in the laboratory of Wulfram Gerstner at the École Polytechnique Fédérale de Lausanne (EPFL). Tim was awarded the Bernstein Award for Computational Neuroscience in 2012.

    Tim Vogels arrived at Oxford in 2013 and is establishing a research group in theoretical and computational neuroscience within the Centre of Neural Circuits and Behaviour. As a computational neuroscientist, he builds conceptual models to understand the fundamentals of neural systems at the cellular level. His research group is funded by a Sir Henry Dale Fellowship of the Wellcome Trust and the Royal Society and part of the neurotheory initiative at the University of Oxford.

  7. Date: 14 June 2017 (Wednesday); Time: 3:00pm-4:00pm; Venue: Ada Lovelace (Regent Court COM-108);
    Title: Factorised Gaussian Process Models;
    Speaker: Dr Carl Henrik Ek, University of Bristol.
    Host: Mauricio A Alvarez Lopez
    Abstract
    Regression is the task of relating an input variate to an output domain by means of a function. To learn this mapping from data means that we are faced with the daunting task of specifying a distribution over the space of functions. Gaussian process priors allow us to do just this in an interpretable and flexible manner. However, for many types of data the relationship cannot be described by a function, as there are multiple parts of the output domain corresponding to the same input location. In this scenario we often have to resort to latent variable models in order to capture the relationship, and these are often characterised by expensive and challenging inference.

    In this talk I will describe a set of different approaches to modelling in the scenario described above. Our idea is that we can build models that learn a factorisation of the variations in the data so as to simplify the inference problem. I will exemplify some of these models on real data from robotics and computer vision.
    Biography
    Dr. Carl Henrik Ek is a lecturer at the University of Bristol. His research focuses on developing computational models that allow machines to learn from data. Specifically, he is interested in Bayesian non-parametric models, which allow for principled quantification of uncertainty, easy interpretability and adaptable complexity. He has worked extensively on models based on Gaussian process priors with applications in robotics and computer vision.

  8. Date: 21 June 2017 (Wednesday); Time: 3:00pm-4:00pm; Venue: Ada Lovelace (Regent Court COM-108);
    Title: From Random Projections to Learning Theory and Back;
    Speaker: Dr Ata Kaban, University of Birmingham.
    Host: Haiping Lu
    Abstract
    We consider two problems in statistical machine learning -- an old one and a new one:
    (1) Given a machine learning task, what kinds of data distributions make it easier or harder? For instance, it is known that large margin makes classification tasks easier.
    (2) Given a high dimensional learning task, when can we solve it from a few random projections of the data with good-enough approximation? This is the compressed learning problem.
    This talk will present results and work in progress that highlight parallels between these two problems. The implication is that random projection -- a simple and effective dimensionality reduction method with origins in theoretical computer science -- is not just a timely subject for efficient learning from large, high dimensional data sets, but can also help make a previously elusive fundamental problem more approachable. On the flip side, the parallel allows us to broaden the guarantees that hold for compressed learning beyond those initially inherited from compressed sensing.
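
    As a small illustration of the compressed learning setup (dimensions, data and classifier are arbitrary choices, not those from the talk): project high dimensional data with a Gaussian random matrix and train a linear classifier in the compressed space.

      # Sketch: learning from a few random projections. Data are projected with
      # a Gaussian random matrix and a linear classifier is trained in the
      # compressed space; dimensions and the classifier are arbitrary choices.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      d, k, n = 2000, 50, 500                        # ambient dim, projected dim, samples
      w = rng.normal(size=d)
      X = rng.normal(size=(n, d))
      y = (X @ w > 0).astype(int)                    # a linearly separable concept

      R = rng.normal(size=(d, k)) / np.sqrt(k)       # random projection matrix
      Xr = X @ R                                     # compressed data

      Xtr, Xte, ytr, yte = train_test_split(Xr, y, test_size=0.3, random_state=0)
      clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
      print("test accuracy in the 50-dimensional compressed space:", clf.score(Xte, yte))
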
    Biography
    Ata Kaban is a senior lecturer in Computer Science at the University of Birmingham, UK, and an EPSRC Early Career Fellow. Her research interests include statistical machine learning and data mining in high dimensional data spaces, algorithmic learning theory, probabilistic modelling of data, and black-box optimisation. She has authored / co-authored 80 peer-reviewed papers, including best paper awards at GECCO'13, ACML'13, ICPR'10, and a runner-up at CEC'15. She was the recipient of an MRC Discipline Hopping award in 2008/09. She holds a PhD in Computer Science (2001) and a PhD in Musicology (1999). She is a member of the IEEE CIS Technical Committee on Data Mining and Big Data Analytics, and vice-chair of the IEEE CIS Task Force on High Dimensional Data Mining.

  9. Date: 20 September 2017 (Wednesday); Time: 3:00pm-4:00pm; Venue: Pool Seminar Room G03, 9 Mappin Street;
    Title: Frequency Content Priors for Gaussian Process Pitch Detection in Polyphonic Music;
    Speaker: Pablo Alvarado, Queen Mary University of London.
    Host: Mauricio A Alvarez Lopez
    Abstract
    Automatic music transcription (AMT) aims to infer a latent symbolic representation (piano-roll, MIDI) of a piece of music, given a corresponding observed audio recording. Transcribing polyphonic music, i.e. when multiple notes are played simultaneously, is a challenging problem due to the highly structured overlap between harmonics. We introduce acoustically inspired Gaussian process (GP) priors into audio content analysis models to improve the detection of patterns required for AMT. In the proposed approach, audio signals are described as a linear combination of sources. Each source is decomposed into the product of an amplitude-envelope activation and a quasi-periodic component process. We propose the Matérn spectral mixture (MSM) kernel for describing the frequency content of single music notes and consider two different regression approaches. In the sigmoid model, activation processes are independently non-linearly transformed. In the softmax model, activation functions are jointly non-linearly transformed. We use variational Bayes for approximate inference, and empirically evaluate how these models work in practice for transcribing polyphonic music.
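
    One plausible (unverified) reading of such a quasi-periodic covariance is a sum of cosine-modulated Matérn components, one per harmonic; the sketch below uses Matérn-1/2 envelopes with invented frequencies and weights, and the exact MSM kernel in the talk may differ.

      # Sketch of one plausible spectral-mixture-style kernel for a single note:
      # a sum of cosine-modulated Matérn-1/2 components, one per harmonic.
      # Frequencies, weights and lengthscales are invented.
      import numpy as np

      def msm_kernel(t1, t2, f0=440.0, n_harmonics=5, lengthscale=0.05):
          """Covariance between time points (in seconds) of one quasi-periodic note."""
          tau = np.abs(t1[:, None] - t2[None, :])
          weights = 1.0 / np.arange(1, n_harmonics + 1) ** 2   # decaying partials
          k = np.zeros_like(tau)
          for h, w in enumerate(weights, start=1):
              k += w * np.exp(-tau / lengthscale) * np.cos(2 * np.pi * h * f0 * tau)
          return k

      t = np.linspace(0, 0.02, 200)
      K = msm_kernel(t, t) + 1e-6 * np.eye(len(t))              # jitter for stability
      sample = np.linalg.cholesky(K) @ np.random.randn(len(t))  # one GP draw
      print("sample of a quasi-periodic process, first 5 values:", sample[:5])
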
    Biography
    Pablo Alvarado received a degree in Electronic Engineering (B. Eng.) and a master's in Electrical Engineering (M. Eng.) from Universidad Tecnológica de Pereira, Colombia, in 2013 and 2014 respectively. In March 2015 he joined the Centre for Digital Music (C4DM) at Queen Mary University of London, UK, as a Ph.D. student. Pablo is interested in machine learning, Gaussian process regression, music audio content analysis, and signal processing. He works on the development of new approaches for pitch detection applied to automatic music transcription using Gaussian processes.

  10. Date: 04 October 2017 (Wednesday); Time: 3:00pm-4:00pm; Venue: Ada Lovelace (Regent Court COM-108);
    Title: Information-Geometric Policy Search for Learning Versatile, Reusable Skills;
    Speaker: Professor Gerhard Neumann, University of Lincoln.
    Host: Mauricio A Alvarez Lopez
    Abstract
    In the future, autonomous robots will be used for various applications such as autonomous farming, handling dangerous materials (for example, decommissioning nuclear waste), health care or autonomous transportation. For such complex scenarios, it is inevitable that autonomous robots are equipped with sophisticated learning capabilities which enable them to learn from human teachers as well as from self-improvement. In this talk, I will present our work on information-geometric policy search methods for learning complex motor skills. Our algorithms use information-geometric insights to exploit curvature and path information in order to perform efficient local search at the level of single elemental motions, also called movement primitives. In parallel with this local search, the algorithms search on a global level by selecting between distinct solutions, allowing us to represent a versatile solution space with high-quality solutions. Our algorithms can be used to efficiently learn motor skills, generalize these motions to different situations, learn reactive skills that can react to perturbations, and select and learn when to switch between these motions. I will also briefly show how to extend our algorithms to learn from preference-based feedback instead of a numeric reward signal, enabling a human expert to guide the learning agent without the need for manual reward tuning. While I will use dynamic motor games, such as table tennis, as motivation throughout my talk, I will also briefly present how to apply similar methods to robot grasping and learning in robot swarms.
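
    The information-geometric machinery itself is beyond a short example, but the general shape of episodic policy search over movement-primitive parameters can be sketched with a Gaussian search distribution and a simple reward-weighted update; this is a generic stand-in, not the speaker's algorithm, and the toy reward is invented.

      # Sketch: episodic policy search over movement-primitive parameters with a
      # Gaussian search distribution and a simple reward-weighted update.
      # A generic stand-in for illustration only.
      import numpy as np

      rng = np.random.default_rng(0)
      dim = 5                                          # number of primitive parameters
      target = np.array([0.5, -1.0, 0.2, 0.0, 1.5])    # hypothetical "good" parameters

      def episode_reward(theta):
          """Toy episodic reward: higher when theta is close to the target motion."""
          return -np.sum((theta - target) ** 2)

      mean, cov = np.zeros(dim), np.eye(dim)
      for _ in range(50):
          thetas = rng.multivariate_normal(mean, cov, size=30)   # sample rollouts
          rewards = np.array([episode_reward(th) for th in thetas])
          w = np.exp((rewards - rewards.max()) / 2.0)            # soft reward weights
          w /= w.sum()
          mean = w @ thetas                                      # weighted mean update
          diff = thetas - mean
          cov = (w[:, None] * diff).T @ diff + 1e-3 * np.eye(dim)

      print("learned primitive parameters:", np.round(mean, 2))
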
    Biography
    Gerhard Neumann is a Professor of Robotics & Autonomous Systems at the University of Lincoln, where he started in November 2016. Before coming to Lincoln, he was an Assistant Professor at TU Darmstadt from September 2014 to October 2016 and head of the Computational Learning for Autonomous Systems (CLAS) group. Before that, he was a Post-Doc and Group Leader at the Intelligent Autonomous Systems Group (IAS), also in Darmstadt, under the guidance of Prof. Jan Peters. Gerhard obtained his Ph.D. under the supervision of Prof. Wolfgang Maass at Graz University of Technology. He has authored 50+ peer-reviewed papers, many of them in top-ranked machine learning and robotics journals or conferences such as NIPS, ICML, ICRA, IROS, JMLR, Artificial Intelligence, Machine Learning, AURO and IJRR. In Lincoln, he is leading a collaboration with Toyota and is PI on 2 Innovate UK projects. In Darmstadt, he is PI of the EU H2020 project Romans and of a DFG project. He has organized several workshops and serves on the senior program committee of several conferences.

  11. Date: 18 October 2017 (Wednesday); Time: 3:00pm-4:00pm; Venue: Ada Lovelace (Regent Court COM-108);
    Title: Fuzzy Rules for Recursive Bayesian Filtering in Multi-Process State Models;
    Speaker: Dr Wil Ward, University of Sheffield.
    Host: Mauricio A Alvarez Lopez
    Abstract
    Systems under the influence of uncertain dynamic processes can pose a distinct challenge for predictive estimators, especially when multiple non-linear processes influence the system state to varying degrees. In a wide range of application domains, including sensing data and target tracking, complex system processes occur simultaneously or consecutively at unknown intervals. The general case can be modelled using a Markov-switching state space model (SSM), where each SSM represents a process affecting the state. The challenge lies in weighting the effect of each process. Current solutions use a switching probability matrix to weight an inference that propagates a set of parallel Kalman filters. However, the switching probability can drastically affect the results, and in real-world cases it is often unknown and potentially highly variable. The alternative approach presented here addresses the problem by combining aspects of fuzzy inference with recursive Bayesian inference. Based on predictions of the state in each SSM of the switching model, the corresponding pdfs of the estimated observation can be considered membership functions of fuzzy sets over each component process, and each process can be considered a linguistic value in the overall SSM. The resulting derivation offers great flexibility and has an intuitive setup. Under certain conditions, inferring the switching probability can be shown to be equivalent to probabilistic models such as a Gaussian mixture model.
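
    For context, the parallel-filter setup that the talk builds on can be sketched as follows: two Kalman filters with different process models run on the same observations, and their observation likelihoods weight the fused estimate. This is the generic multiple-model baseline with invented noise values; the fuzzy-membership weighting proposed in the talk is not reproduced here.

      # Sketch: two Kalman filters with different process models run in parallel
      # on the same 1-D observations; their observation likelihoods weight the
      # fused estimate. A generic multiple-model baseline.
      import numpy as np

      rng = np.random.default_rng(0)
      dt, T = 1.0, 60
      truth = np.concatenate([np.zeros(30), np.cumsum(0.5 * np.ones(30))])  # still, then moving
      obs = truth + rng.normal(scale=1.0, size=T)

      # Model 1: nearly constant position; Model 2: nearly constant velocity.
      models = [
          dict(A=np.eye(2), Q=np.diag([0.01, 1e-6])),
          dict(A=np.array([[1.0, dt], [0.0, 1.0]]), Q=np.diag([0.05, 0.05])),
      ]
      H, R = np.array([[1.0, 0.0]]), 1.0
      states = [dict(m=np.zeros(2), P=np.eye(2) * 10.0) for _ in models]

      for z in obs:
          likelihoods = []
          for mod, st in zip(models, states):
              m, P = mod["A"] @ st["m"], mod["A"] @ st["P"] @ mod["A"].T + mod["Q"]
              S = (H @ P @ H.T).item() + R
              innov = z - (H @ m).item()
              likelihoods.append(np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S))
              K = (P @ H.T).ravel() / S                  # Kalman gain
              st["m"], st["P"] = m + K * innov, P - np.outer(K, K) * S
          w = np.array(likelihoods) / (np.sum(likelihoods) + 1e-12)
          fused = sum(wi * st["m"] for wi, st in zip(w, states))

      print("final fused position estimate:", fused[0], "| true final position:", truth[-1])
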
    Biography
    Wil Ward is a Research Associate in Deep Probabilistic Machine Learning at the University of Sheffield, starting from October 2017. Previously, he studied Mathematics and Computer Science to Masters level at the University of Nottingham, and went on to complete a PhD in Computer Science in a collaborative project with the British Geological Survey, funded by the BGS-University Funding Initiative. The research project dealt with developing and adapting Computer Vision and Machine Learning techniques for Electrical Resistivity Tomography images. Prior to this, he briefly worked as a research assistant looking at image analysis for medical data. His work has appeared in a range of application venues, including the proceedings of EMBC and journals such as Geophysical Journal International and Water Resources Research.

  12. Date: 25 October 2017 (Wednesday); Time: 3:30pm-4:30pm; Venue: Ada Lovelace (Regent Court COM-108);
    Title: Colliding Galaxies, Machine Learning and Systems Medicine: Could Machine Learning Ever Cure Alzheimer's Disease?;
    Speaker: Professor Winston Hide, University of Sheffield.
    Host: Haiping Lu

    Abstract
    Recent advances in the application of machine learning to genomic-scale data interpretation for drug repurposing have brought biotechnology and data science into stark relief. This sea change reflects the challenges and opportunities created by two behemoths which, like galaxies, are slowly colliding. Organisations are waking up to the value of having information that comes from existing datasets, generating targeted data and looking within it to drive insight, rather than establishing a hypothesis about a particular drug target/drug combination and finding anecdotal data to support or refute it.

    As deep learning is more broadly applied, it has started to touch the world of this high-dimensional, data-driven drug discovery. But the application of drug repurposing to systems medicine rests to a large extent on a better understanding of the pharmacological properties of small molecules, based on their transcriptional response signatures in cell lines and model systems.

    To move the field forwards, ML and systems medicine approaches can be combined to leverage the ability of ML to learn and predict across large data while exploiting systems-level aspects of the networked relationships between genes and how the topology of these relationships drives disease. Both must overcome the diversity and noise of data types and the challenge of data sharing, and achieve a better understanding of the underlying features which drive selection of drug and target combinations. Both need far better benchmarking to drive the definition of well-defined outcomes.

    The presentation will describe these collision points with machine learning and the steps we are taking to address the inherent challenges in successfully predicting drug/target combinations.

    Biography
    Winston Hide gained a PhD at Temple University in Philadelphia in 1992, and performed post-doctoral training in molecular evolution at the University of Texas in Houston, at Baylor College of Medicine's Human Genome Centre with Richard Gibbs, and at the Smithsonian Museum of Natural History in Washington DC. He was director of genomics at the MasPar high-performance computer corporation in Silicon Valley, then returned to South Africa to found and direct the South African National Bioinformatics Institute (SANBI) at the University of the Western Cape in 1996. An author of the National Biotechnology Strategy for South Africa, he co-founded the South African National Bioinformatics Network. A Kerr Fellow of the Ludwig Cancer Institute, he was a member of the Brazilian team investigating the cancer transcriptome. A co-founder of the WHO regional training programme in Bioinformatics, he co-led the development of bioinformatics training across the Southern Hemisphere. Focusing on the development of Africa's peoples, he is a founding member of the steering committee that established the NIH-Wellcome Trust funded Pan African H3 Africa genome initiative. Together with a group from the WHO, Yale, and the US Sanger Center he established the International Glossina Genome Initiative in 2005, which culminated in the publication of the tsetse fly genome in 2014. Hide has been recognised for these activities by receipt of the first International Society for Computational Biology award for outstanding achievement.

    Hide was, until recently, Associate Professor of Bioinformatics and Computational Biology in the Department of Biostatistics at the Harvard School of Public Health, where he led the development of bioinformatics addressing personal genomics approaches to public health and directed the HSPH Bioinformatics Core. Hide developed the bioinformatics strategy for the Harvard Stem Cell Institute and was Director of its Center for Stem Cell Bioinformatics, where he built a science commons for big-data sharing and integration.

    Hide is now Chair in Computational Biology at the University of Sheffield, where he drives the development of systems medicine and genome translation at the Sheffield Institute for Translational Neuroscience.

    His research addresses the integration of genomic data to deliver clinical translation of genomics: repurposing drugs, prioritising drug targets, and predicting predisposition to disease. Hide directs the bioinformatics for the Alzheimer's Genome Project (Cure Alzheimer's Foundation), where he has built the computational infrastructure for, and analyzed, 1500 whole genome sequences from patients with the disease, and is a driving member of the CureAD CIRCUITS consortium, a group made up of scientists from Harvard, MIT, UCSF and Sheffield funded by the Cure Alzheimer's Foundation to determine the regulatory processes that go awry in Alzheimer's. Currently a visiting faculty member of the Computer Science and Artificial Intelligence Laboratory at MIT, Hide also leads the Sheffield Node of the Connected Health Cities Consortium of Northern Universities.

  13. Date: 01 November 2017 (Wednesday); Time: 3:30pm-4:30pm; Venue: Ada Lovelace (Regent Court COM-108);
    Title: Mapping neural activity to conceptual structure with regularized regression;
    Speaker: Dr Christopher Cox, University of Manchester.
    Host: Haiping Lu

    Abstract
    Concepts are integral to cognition and foundational to knowledge, but how concepts are represented in the brain is not well understood. Influential computational models of cognition have simulated how concepts develop and are utilized as distributed representations in artificial neural networks (ANNs). Distributed representation is also a principle at the core of contemporary deep-learning networks, which, for example, power today's most successful computer speech and image recognition models, and have achieved stunning feats like Google DeepMind's AlphaGo teaching itself to play the board game Go so well that it consistently beats elite human players. While ANNs are "neurally inspired", their usefulness as models of how brains actually represent knowledge is contested. I argue this is largely because neuroimaging data is typically explored in ways that are not sensitive to distributed representation, even if it does exist. I will present two techniques based on regularized linear regression that I am using to provide a fairer test of whether neural representations of concepts are distributed, in the ANN sense. These techniques are applied to neuroimaging datasets, yielding results which generally support the idea that distributed representation may be a useful analogue to concepts at the neural level, in addition to the cognitive level.
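
    The general shape of such an analysis can be sketched with ordinary ridge regression mapping synthetic voxel patterns to conceptual feature vectors; the talk's specific regularizers and data are not reproduced here.

      # Sketch: mapping synthetic voxel activity to conceptual feature vectors
      # with ridge regression, i.e. a regularized linear read-out of a
      # distributed code.
      import numpy as np

      rng = np.random.default_rng(0)
      n_items, n_voxels, n_features = 60, 500, 10

      features = rng.normal(size=(n_items, n_features))         # semantic features per concept
      W_true = rng.normal(size=(n_features, n_voxels)) * 0.5     # distributed encoding
      voxels = features @ W_true + rng.normal(size=(n_items, n_voxels))

      # Closed-form ridge regression from voxels back to features, trained on the
      # first 50 items and evaluated on the 10 held-out items.
      lam = 10.0
      Xtr, Ytr, Xte, Yte = voxels[:50], features[:50], voxels[50:], features[50:]
      B = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_voxels), Xtr.T @ Ytr)
      pred = Xte @ B

      corr = [np.corrcoef(pred[:, j], Yte[:, j])[0, 1] for j in range(n_features)]
      print("mean held-out feature correlation:", round(float(np.mean(corr)), 2))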

    Biography
    Chris Cox received a PhD in Cognitive Psychology from the University of Wisconsin - Madison in December 2016, and is currently a post-doctoral researcher in the Neuroscience and Aphasia Research Unit at the University of Manchester. In the fall of 2018 he will begin a faculty position in the Psychology Department at Louisiana State University.

    Dr Cox is a cognitive neuroscientist who studies how conceptual knowledge is learned, stored, and retrieved. This begins with asking questions about what such representations are like in the first place. Computational modelling provides the basis for forming hypotheses that attempt to resolve this basic question, and behavioural and neuroimaging experimentation provides a means of evaluating these hypotheses. A critical part of his research program relies on collaborations with engineers, mathematicians, and computer scientists to develop and apply optimization routines and computational workflows that are sophisticated enough to test these hypotheses as well as is currently possible.

  14. Date: 23 November 2017 (Thursday); Time: 3:30pm-4:30pm; Venue: Ada Lovelace (Regent Court COM-108);
    Title: Cortical microcircuits as gated-recurrent neural networks;
    Speaker: Dr Rui Ponte Costa, University of Bern (Switzerland) and University of Oxford.
    Host: Eleni Vasilaki

    Abstract
    Cortical circuits exhibit intricate recurrent architectures that are remarkably similar across different brain areas. Such stereotyped structure suggests the existence of common computational principles. However, such principles have remained largely elusive. Inspired by gated-memory networks, namely long short-term memory networks (LSTMs), I will describe a recurrent neural network in which information is gated through inhibitory cells that are subtractive (subLSTM). We propose a natural mapping of subLSTMs onto known canonical excitatory-inhibitory cortical microcircuits. Empirical evaluation across sequential image classification and language modelling tasks shows that subLSTM units can achieve performance similar to LSTM units. These results suggest that cortical circuits can be optimised to solve complex contextual problems, and they propose a novel view of their computational function. Overall, our work provides a step towards unifying the recurrent networks used in machine learning with their biological counterparts.

    (Joint work with: Yannis M Assael, Brendan Shillingford, Nando de Freitas and Tim P Vogels; paper available here: https://arxiv.org/abs/1711.02448)
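
    A compact sketch of the subtractive-gating idea follows, based on a simplified reading of the linked paper (consult the paper for the exact formulation); weights and sequence data are random placeholders.

      # Sketch of a subtractive-gating recurrent cell in the spirit of the
      # subLSTM: gates act by subtraction rather than multiplication. A
      # simplified reading of the linked paper, not a verified re-implementation.
      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      class SubLSTMCell:
          def __init__(self, n_in, n_hidden, seed=0):
              rng = np.random.default_rng(seed)
              # One weight matrix per gate / input transform (input, forget, output, candidate).
              self.W = {g: rng.normal(scale=0.1, size=(n_hidden, n_in + n_hidden + 1))
                        for g in ("i", "f", "o", "z")}

          def step(self, x, h, c):
              v = np.concatenate([x, h, [1.0]])         # input, recurrent state, bias
              i, f, o, z = (sigmoid(self.W[g] @ v) for g in ("i", "f", "o", "z"))
              c_new = f * c + z - i                     # subtractive input gating
              h_new = sigmoid(c_new) - o                # subtractive output gating
              return h_new, c_new

      cell = SubLSTMCell(n_in=4, n_hidden=8)
      h, c = np.zeros(8), np.zeros(8)
      for _ in range(10):                               # run on a random input sequence
          h, c = cell.step(np.random.randn(4), h, c)
      print("final hidden state:", np.round(h, 3))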

    Biography
    I have a PhD in Computational Neuroscience & Machine Learning from the University of Edinburgh (2015), and did postdoctoral work at the University of Oxford (2014-2017; 2017- visiting researcher) in collaboration with theoretical, experimental and machine learning labs. Currently, I am a postdoc at the University of Bern, trying to teach recurrent cortical microcircuits.

    My research focuses on the development of statistical and computational methods to better understand the principles that underlie intelligent behaviour. In particular, I develop unifying models of neural learning and networks grounded on experimental data, which shed light on how the brain solves complex computational problems and inspire novel machine learning algorithms.

  15. Date: 29 November 2017 (Wednesday); Time: 3:30pm-4:30pm; Venue: Ada Lovelace (Regent Court COM-108);
    Title: Learning Non-Stationary Data Streams With Gradually Evolved Classes;
    Speaker: Dr Leandro Minku, University of Leicester.
    Host: Haiping Lu

    Abstract
    In machine learning, class evolution is the phenomenon of class emergence and disappearance. It is likely to occur in many data stream problems, i.e. problems where additional training data become available over time. For example, in the problem of classifying tweets according to their topic, new topics may emerge over time, and certain topics may become unpopular and no longer be discussed. Therefore, class evolution is an important research topic in the area of learning from data streams. Existing work implicitly regards class evolution as an abrupt change. However, in many real-world problems, classes emerge or disappear gradually, giving rise to extra challenges such as non-stationary imbalance ratios between the different classes in the problem. In this talk, I will present an ensemble approach able to deal with gradually evolved classes. In order to quickly adjust to class evolution, the ensemble maintains a base learner for each class and dynamically creates, updates and (de)activates base learners whenever new training data become available. It also uses a dynamic undersampling technique in order to deal with the non-stationary class imbalance present in this type of problem. Empirical studies demonstrate the effectiveness of the proposed approach in various class evolution scenarios, in comparison with existing class evolution approaches.
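
    As a generic stand-in for the ideas described above (not the proposed algorithm itself), one can keep one online one-vs-rest learner per class, create a learner when a class first appears, and update each learner with undersampled negatives:

      # Sketch: a per-class online ensemble for streams with emerging classes.
      # Each class gets its own one-vs-rest learner, created when the class
      # first appears and updated with undersampled negatives.
      import numpy as np
      from sklearn.linear_model import SGDClassifier

      rng = np.random.default_rng(0)
      learners = {}                                     # class label -> binary learner

      def update(X_batch, y_batch):
          for cls in np.unique(y_batch):
              if cls not in learners:                   # class emergence: add a learner
                  learners[cls] = SGDClassifier(random_state=0)
          for cls, clf in learners.items():
              pos, neg = X_batch[y_batch == cls], X_batch[y_batch != cls]
              if len(pos) == 0 or len(neg) == 0:
                  continue
              keep = rng.choice(len(neg), size=min(len(neg), len(pos)), replace=False)
              X = np.vstack([pos, neg[keep]])           # undersample the negatives
              y = np.r_[np.ones(len(pos)), np.zeros(len(keep))]
              clf.partial_fit(X, y, classes=[0.0, 1.0])

      def predict(X):
          labels = list(learners)
          scores = np.vstack([learners[c].decision_function(X) for c in labels])
          return np.array(labels)[np.argmax(scores, axis=0)]

      # A stream in which class 2 emerges only in the second batch.
      X1, y1 = rng.normal(size=(100, 5)), rng.choice([0, 1], size=100)
      X2, y2 = rng.normal(size=(100, 5)) + 2.0, np.full(100, 2)
      update(X1, y1)
      update(np.vstack([X1[:20], X2]), np.r_[y1[:20], y2])
      print("predictions on new data:", predict(X2[:5]))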

    Biography
    Dr. Leandro L. Minku is a Lecturer at the Department of Informatics, University of Leicester (UK). Prior to that, he was a research fellow at the University of Birmingham (UK) for five years. He received the PhD degree in Computer Science from the University of Birmingham (UK) in 2010. During his PhD, he was the recipient of the Overseas Research Students Award (ORSAS) from the British government and was invited to a 6-month internship at Google. Dr. Minku's main research interests are machine learning in non-stationary environments / data stream mining, online class imbalance learning, ensembles of learning machines and computational intelligence for software engineering. His work has been published in internationally renowned journals such as IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Knowledge and Data Engineering, IEEE Transactions on Software Engineering and ACM Transactions on Software Engineering and Methodology. Among other roles, Dr. Minku was a co-chair for the IJCAI'17 Workshop on Learning in the Presence of Class Imbalance and Concept Drift and a guest editor for the Neurocomputing Special Issue on this topic, and is a steering committee member for the International Conference on Predictive Models and Data Analytics in Software Engineering (PROMISE), an Associate Editor for the Journal of Systems and Software, and a conference correspondent for IEEE Software.


Currently maintained by Haiping Lu

URL: http://www.dcs.shef.ac.uk/~haiping/mlseminar.html