Working with neuroscience data in the Python ecosystem

Friday, November 12, 2021 - 02:20 pm
Storey Innovation Center 1400

Meeting Location:

Storey Innovation Center 1400

Live Meeting Link for the virtual audience

Talk Abstract: About 15 years ago, as I was working on a graphical interface for scientific software in Matlab, I got frustrated by the clumsy code structure that Matlab required for GUI coding. Although I thought that the C++ Qt library would be a great alternative, I did not want my fast-prototyping process slowed by low-level coding. Since Python had bindings for Qt, I decided to translate all my code into Python. To my surprise, I was able to complete this process swiftly over a weekend. Since then, I have been working almost exclusively in Python, and I have never regretted it for a single day. In this talk, I will present the main components of the Python stack for scientific programming, focusing on neuroscience and illustrating them with a brief analysis of EEG recordings (MNE-Python). I will discuss why Python has become a major player in this field and how limitations typical of interpreted languages (e.g., slow runtime) have been tackled with libraries such as NumPy. I will also explain why Python is a strong environment for data wrangling by introducing libraries like Pandas – which offers data frame functionality similar to R – and XArray. Finally, I will touch upon how libraries like Seaborn provide a high-level interface for quickly producing publication-quality figures with only a few (if not a single) lines of code.
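To give a feel for the stack the talk describes, here is a minimal sketch using NumPy and Pandas on synthetic "EEG-like" data. The channel names, sampling rate, and signal are invented for illustration; a real analysis would load actual recordings with MNE-Python.

```python
import numpy as np
import pandas as pd

# Simulate a short multi-channel "EEG-like" signal (synthetic, not real data).
rng = np.random.default_rng(0)
sfreq = 250  # sampling rate in Hz (assumed)
times = np.arange(0, 2, 1 / sfreq)  # 2 seconds of samples
channels = ["Fz", "Cz", "Pz"]

# A 10 Hz alpha-band sine plus noise, vectorized with NumPy (no Python loops),
# which is how NumPy sidesteps the slow-runtime problem of interpreted code.
data = np.sin(2 * np.pi * 10 * times)[None, :] + 0.5 * rng.standard_normal((3, times.size))

# Pandas gives R-like data frames for wrangling: here, long format, one row per sample.
df = (pd.DataFrame(data.T, columns=channels)
        .assign(time=times)
        .melt(id_vars="time", var_name="channel", value_name="amplitude"))

# Per-channel summary statistics in a single line.
print(df.groupby("channel")["amplitude"].std())
```

From a data frame in this long format, a single Seaborn call such as a line plot grouped by channel would produce a publication-quality figure, as the abstract notes.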

 

Speaker's Bio: Christian O’Reilly received his B.Ing (elec eng; 2007), his M.Sc.A. (biomed eng; 2011), and his Ph.D. (biomed eng; 2012) from Polytechnique Montreal. He was a postdoc fellow at the CARSM (2012-2014) and then an NSERC postdoc fellow at McGill's Brain Imaging Center (2014-2015), where he worked on EEG sleep transients. He also worked at the EPFL (2015-2018) on modeling of the thalamocortical loop and at McGill on brain connectivity (2020-2021). Since 2021, he has been an Assistant Professor at UofSC.

Mind the Gap: What lies between the end of CMOS scaling and future technologies?

Friday, November 5, 2021 - 02:20 pm
Storey Innovation Center 1400

Meeting Location:

Storey Innovation Center 1400

Live Meeting Link for the virtual audience:

https://teams.microsoft.com/l/meetup-join/19%3ameeting_OWZkMTYwZDQtMDVmMy00NjA1LTgxNmEtZDExMDdiZTM2ZjYz%40thread.v2/0?context=%7b%22Tid%22%3a%224b2a4b19-d135-420e-8bb2-b1cd238998cc%22%2c%22Oid%22%3a%225fc2170a-7068-4a33-9021-df11b94ba696%22%7d

Talk Abstract: David’s talk will cover some of the exciting technologies the Devices, Circuits & Systems group at ARM is researching as well as what he sees in the general trend and future of process technology. And since it’s not possible to discuss CMOS scaling without commenting on Moore’s law, he will do that, too. 🙂

 

Speaker's Bio: David Pietromonaco has been in the semiconductor industry for almost 30 years at Hewlett-Packard, Sony, and most recently Artisan/Arm (for 20 of those). He works in Arm Research; in the Devices, Circuits & Systems group, specifically on the Technology Optimized Design team. That team tries to look 5-10 years ahead to understand future computing technologies and how to utilize them.

Correct Web Service Transactions in the Presence of Malicious and Misbehaving Transactions 

Monday, November 1, 2021 - 01:30 pm
Online

DISSERTATION DEFENSE

Department of Computer Science and Engineering

University of South Carolina 

Author : John Ravan

Advisor : Dr. Csilla Farkas

Date : November 1, 2021

Time : 1:30pm

Place : Virtual Defense

Join Zoom Meeting

https://citadelonline.zoom.us/j/5257755660?pwd=dzBwNW85RUdSRjVWdGp4RzRxbzE2UT09

 

Abstract

 

Concurrent database transactions within a web service environment can cause a variety of problems without the proper concurrency control mechanisms in place. These problems include data integrity violations, deadlock, and inefficiency. Today's industry-standard solutions take a reactive approach rather than proactively preventing these problems from happening. We deliver a twofold solution that presents a proactive, prediction-based approach to ensure consistency while keeping execution time the same as or faster than current industry solutions. The first part of this solution involves prototyping and formally proving a prediction-based scheduler.

The prediction-based scheduler leverages a prediction-based metric that promotes transactions with reliable reputations based on each transaction's performance metric. This performance metric is based on the transaction's likelihood to commit and its efficiency within the system. We can then predict the outcome of the transaction based on the metric and apply customized lock behaviors to address consistency issues in current web service environments. We have formally proven that the solution will increase consistency among web service transactions without a performance degradation worse than industry-standard two-phase locking (2PL). The simulation was developed using a multi-threaded approach to simulate concurrent transactions. Experimental results show that the solution performs comparably to industry solutions, with the added benefit of ensured consistency in some cases and deadlock avoidance in others. This work has been published in IEEE Transactions on Services Computing.

The second part of the solution involves building the prediction-based metric mentioned previously. In the initial solution, we assumed the prediction-based categorization as a given input in order to prove the feasibility and correctness of a prediction-based scheduler.

Once that was established, we extended the four-category solution to a dynamic reputation score built upon transactional attributes. The attributes used in the reputation score are system abort ranking, user abort ranking, efficiency ranking, and commit ranking. With these four attributes, we were able to establish a dynamic dominance structure that allows a transaction to promote or demote itself based on its performance within the system. This work has been submitted to ACM Transactions on Information Systems and is awaiting review.
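As a rough illustration of how a four-attribute dominance structure can work, consider the sketch below. The class name, scoring formula, and the convention of which rankings are "lower is better" are assumptions made for illustration, not the dissertation's actual definitions.

```python
from dataclasses import dataclass

@dataclass
class TransactionReputation:
    # The four attributes named in the abstract; scales here are illustrative.
    system_abort_rank: float  # lower is better
    user_abort_rank: float    # lower is better
    efficiency_rank: float    # higher is better
    commit_rank: float        # higher is better

    def score(self) -> float:
        # Hypothetical aggregate: reward commits and efficiency, penalize aborts.
        return (self.commit_rank + self.efficiency_rank
                - self.system_abort_rank - self.user_abort_rank)

def dominates(a: TransactionReputation, b: TransactionReputation) -> bool:
    """a dominates b if it is at least as good on every attribute
    and strictly better on at least one (Pareto dominance)."""
    pairs = [(b.system_abort_rank, a.system_abort_rank),  # lower is better
             (b.user_abort_rank, a.user_abort_rank),      # lower is better
             (a.efficiency_rank, b.efficiency_rank),      # higher is better
             (a.commit_rank, b.commit_rank)]              # higher is better
    return all(x >= y for x, y in pairs) and any(x > y for x, y in pairs)
```

A scheduler could recompute these rankings as transactions run, letting a transaction's priority rise or fall dynamically, in the spirit of the promotion/demotion the abstract describes.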

Both phases provide a complete solution of prediction-based transaction scheduling that provides dynamic categorization no matter the transactional environment. 

Future work of this system would involve extending the prediction-based solution to a multi-level secure database with an added dimension. The dimension provides a security classification in addition to attributes for dynamic reputation that allows for transactions to establish dominance. The goal would be to prevent covert timing channels that occur in multi-level secure database systems due to the differing classifications. Our reputation score would provide a cover story for timing differences of transactions of different security levels to allow for a more robust scheduling algorithm. This would allow for high security transactions to gain priority over low security transactions without exposing a covert timing channel. 

AI & The Automation of Labor

Friday, October 29, 2021 - 02:20 pm
Storey Innovation Center 1400

Live Meeting Link for the virtual audience:



Talk Abstract: In the past 10 years, we have seen the rise of SaaS (Software as a Service) and watched it take over many existing businesses. Amazon, Netflix, and Expedia are household-name SaaS-operated businesses that replaced traditional ones. As Machine Learning (ML) and Artificial Intelligence (AI) rise, we are also seeing many jobs replaced and automated by machines. This is happening at a much faster pace than anticipated. AI is also replacing jobs that were once thought to be securely dominated by humans: jobs that require some form of human intellect. We will discuss what AI really is beyond what the media defines it to be. We will also discuss the implications of this automation for society and the labor market. We will try to show that AI, contrary to the latest media scare, is going to bring an era of unprecedented productivity gains and prosperity, comparable to what the industrial revolution brought a few centuries back. But that requires us to be prepared as a society.

 


Speaker's Bio: 

Ahmad Abdulkader is a well-renowned industry expert, with over 50 publications and patents, in Machine Learning and Artificial Intelligence. 

https://scholar.google.com/citations?user=HZxrGFIAAAAJ&hl=en&fbclid=IwAR2DiNkhUYa-Z2NG04vTLsKZcL61qNUEelFlokIjkUfepy3D8F-fh79RzZE

 

Ahmad is currently a Distinguished Scientist at Facebook AI Applied Research. Ahmad invented DeepText, a Deep-Learning Text Understanding Platform that is widely used throughout FB and the open-source community.

https://engineering.fb.com/2016/06/01/core-data/introducing-deeptext-facebook-s-text-understanding-engine/

 

Prior to Facebook, Abdulkader was the co-founder & CTO of Voicea.ai which was acquired by Cisco in 2019. Voicea built a widely-used meetings platform that became part of Cisco's WebEx.

https://www.amazon.com/Attracting-technical-co-founders-corporate-fundraising/dp/B08KVDHRN8

 

Ahmad also worked for Google, where he built the optical character recognition and verification systems for Google's BookSearch. Ahmad is one of the main contributors to Tesseract, the most widely used open-source OCR engine.

https://github.com/tesseract-ocr/tesseract/blob/master/AUTHORS

 

In addition, Ahmad was one of the pioneers of StreetView at Google. He was one of the main creators of StreetSmart, a computer vision platform for privacy protection and scene understanding in StreetView.

https://research.google/pubs/pub35481/

 

At Microsoft corporation, Ahmad was one of the pioneers of the Handwriting Recognition Technology that powers the Microsoft Surface devices. 

https://www.researchgate.net/publication/255619869_Personalization_of_an_Online_Handwriting_Recognition_System

 

Ahmad is also one of the earliest contributors to Arabic OCR and handwriting recognition, and is the co-inventor of the first Arabic OCR engine (ICRA) in 1994.

https://org.uib.no/smi/ksv/ArabOCR.html

 

Ahmad studied at Cairo University where he got his B.Sc. and M.Sc. in Electrical Engineering and at McMaster University & University of Washington where he got his M.Sc. & Ph.D. in Computer Science.

Semantics-Aware Anomaly Detection for Smart Homes

Friday, October 22, 2021 - 02:20 pm
Storey Innovation Center 1400

Meeting Location:

Storey Innovation Center 1400

Live Meeting Link for the virtual audience


Speaker's Bio: Dr. Qiang Zeng is an Assistant Professor in the CSE department at the University of South Carolina. He received his Ph.D. in Computer Science and Engineering from Penn State University. His main interest is Computer Systems Security, with a focus on the Internet of Things and Mobile Computing. He is also interested in Adversarial Machine Learning. He publishes his work in CCS, USENIX Security, NDSS, MobiCom, MobiSys, PLDI, etc.

Talk Abstract: As IoT devices are integrated via automation and coupled with the physical environment, anomalies in an appified smart home, whether due to attacks or device malfunctions, may lead to severe consequences. Prior works that utilize data mining techniques to detect anomalies suffer from high false alarm rates and miss many real anomalies. Our observation is that data mining-based approaches miss a large chunk of information about automation programs (also called smart apps) and device relations. We propose Home Automation Watcher (HAWatcher), a semantics-aware anomaly detection system for appified smart homes. HAWatcher models a smart home’s normal behaviors based on both event logs and semantics. Given a home, HAWatcher generates hypothetical correlations according to semantic information, such as apps, device types, relations, and installation locations, and verifies them with event logs. The mined correlations are refined using correlations extracted from the installed smart apps. The refined correlations are used by a Shadow Execution engine to simulate the smart home’s normal behaviors. During runtime, inconsistencies between devices’ real-world states and simulated states are reported as anomalies. We evaluate our prototype on the SmartThings platform in four real-world testbeds and test it against a total of 62 different anomaly cases. The results show that HAWatcher achieves high accuracy, significantly outperforming prior approaches.
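The shadow-execution idea in the abstract can be sketched in a few lines. The correlation rule and device names below are invented for illustration; the real system mines such correlations from event logs and smart-app code.

```python
# Hypothetical correlation: "when the presence sensor reports 'home',
# the hallway light should be 'on'". Device names are made up for this sketch.
correlations = [("presence:home", "hallway_light", "on")]

def shadow_expected(event, simulated):
    """Shadow execution: update the simulated state a normally behaving
    home would reach after the given event."""
    for trigger, device, state in correlations:
        if event == trigger:
            simulated[device] = state
    return simulated

def detect_anomalies(real, simulated):
    # Any device whose real-world state disagrees with the simulated state
    # is reported as an anomaly (possible attack or malfunction).
    return [d for d, s in simulated.items() if real.get(d) != s]

simulated = shadow_expected("presence:home", {})
print(detect_anomalies({"hallway_light": "off"}, simulated))
```

Here the simulated home expects the hallway light on after the presence event, so an "off" reading from the real device would be flagged as an anomaly.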

AI Explainability

Thursday, October 21, 2021 - 10:00 am
Seminar Room, AI Institute, 1112 Greene St, Columbia (5th Floor; Science & Technology Building) 

Two Lectures on AI Explainability 

As part of Trusted AI course in Fall 2021 by Prof. Biplav Srivastava https://sites.google.com/site/biplavsrivastava/research-1/trustedai  

Oct 19, Tuesday, 10:00-11:15 am - Talk 

Blackboard link: https://us.bbcollab.com/guest/f567247c101145cebc6eaa937af2cecd  

Oct 21, Thursday, 10:00-11:15 am – Talk and Working Session 

Blackboard link: https://us.bbcollab.com/guest/f567247c101145cebc6eaa937af2cecd  

On campus class at Seminar Room, AI Institute, 1112 Greene St, Columbia (5th Floor; Science & Technology Building) 

 

Speakers: Dr. Diptikalyan Saha (Dipti) and Dr. Vijay Arya 

 

Speakers Bio

Dr. Diptikalyan Saha (Dipti) is a Senior Technical Staff Member and manager of the Reliable AI team in the Data&AI department of IBM Research at Bangalore. His research interests include Artificial Intelligence, Natural Language Processing, Knowledge Representation, Program Analysis, Security, Software Debugging, Testing, Verification, and Programming Languages. He received his Ph.D. degree in Computer Science from the State University of New York at Stony Brook and his B.E. degree in Computer Science and Engineering from Jadavpur University. His group’s work on Bias in AI Systems is available through AI OpenScale in IBM Cloud as well as through the open-source AI Fairness 360.

 

 

Vijay Arya is a senior researcher in IBM Research AI at the IBM India Research Lab, where he works on problems related to Trusted AI. Vijay has 15 years of combined experience in research and software development. His research spans machine learning, energy & smart grids, network measurements & modeling, wireless networks, algorithms, and optimization. His work has received outstanding technical achievement awards at IBM and has been deployed by power utilities in the USA. Before joining IBM, Vijay worked as a researcher at National ICT Australia (NICTA); he received his PhD in Computer Science from INRIA, France, and a Masters from the Indian Institute of Technology (IIT) Delhi. He has served on the program committees of IEEE, ACM, and IFIP conferences, is a senior member of IEEE & ACM, and has more than 60 conference & journal publications and patents.

AI Explainability 

Tuesday, October 19, 2021 - 10:00 am
Seminar Room, AI Institute, 1112 Greene St, Columbia (5th Floor; Science & Technology Building) 

Two Lectures on AI Explainability 

As part of Trusted AI course in Fall 2021 by Prof. Biplav Srivastava https://sites.google.com/site/biplavsrivastava/research-1/trustedai  

Oct 19, Tuesday, 10:00-11:15 am - Talk 

Blackboard link: https://us.bbcollab.com/guest/f567247c101145cebc6eaa937af2cecd  

 

Oct 21, Thursday, 10:00-11:15 am – Talk and Working Session 

Blackboard link: https://us.bbcollab.com/guest/f567247c101145cebc6eaa937af2cecd  

 

On campus class at Seminar Room, AI Institute, 1112 Greene St, Columbia (5th Floor; Science & Technology Building) 

 

Speakers: Dr. Diptikalyan Saha (Dipti) and Dr. Vijay Arya 

 

Speakers Bio

Dr. Diptikalyan Saha (Dipti) is a Senior Technical Staff Member and manager of the Reliable AI team in the Data&AI department of IBM Research at Bangalore. His research interests include Artificial Intelligence, Natural Language Processing, Knowledge Representation, Program Analysis, Security, Software Debugging, Testing, Verification, and Programming Languages. He received his Ph.D. degree in Computer Science from the State University of New York at Stony Brook and his B.E. degree in Computer Science and Engineering from Jadavpur University. His group’s work on Bias in AI Systems is available through AI OpenScale in IBM Cloud as well as through the open-source AI Fairness 360.

 

Vijay Arya is a senior researcher in IBM Research AI at the IBM India Research Lab, where he works on problems related to Trusted AI. Vijay has 15 years of combined experience in research and software development. His research spans machine learning, energy & smart grids, network measurements & modeling, wireless networks, algorithms, and optimization. His work has received outstanding technical achievement awards at IBM and has been deployed by power utilities in the USA. Before joining IBM, Vijay worked as a researcher at National ICT Australia (NICTA); he received his PhD in Computer Science from INRIA, France, and a Masters from the Indian Institute of Technology (IIT) Delhi. He has served on the program committees of IEEE, ACM, and IFIP conferences, is a senior member of IEEE & ACM, and has more than 60 conference & journal publications and patents.

Sampling and Robustness in Multi-Robot Visibility-Based Pursuit-Evasion

Tuesday, October 19, 2021 - 09:00 am
Meeting Room 2265, Innovation Center

DISSERTATION DEFENSE

 Department of Computer Science and Engineering

University of South Carolina


Author : Trevor Olsen
Advisor : Dr. Jason O'Kane
Date : Oct 19, 2021
Time : 9:00am
Place : Meeting Room 2265, Innovation Center

Abstract

Given a two-dimensional polygonal space, the multi-robot visibility-based pursuit-evasion problem tasks several pursuer robots with the goal of establishing visibility with an arbitrarily fast evader. The best-known complete algorithm for this problem takes time doubly exponential in the number of robots. However, sampling-based techniques have shown promise in generating feasible solutions in these scenarios.

Existing sampling-based algorithms have long execution times and high failure rates for complex environments. We first address that limitation by proposing a new algorithm that takes an environment as its input and returns a joint motion strategy which ensures that the evader is captured by one of the pursuers. Starting with a single pursuer, we sequentially construct data structures called Sample-Generated Pursuit-Evasion Graphs to create such a joint motion strategy. This sequential graph structure ensures that our algorithm will always terminate with a solution, regardless of the complexity of the environment.

Another aspect of this problem that has yet to be explored concerns how to ensure that the robots can recover from catastrophic failures which leave one or more robots unexpectedly incapable of continuing to contribute to the pursuit of the evader. To address this issue, we propose an algorithm that can rapidly recover from catastrophic failures. When such failures occur, a replanning occurs, leveraging both the information retained from the previous iteration and the partial progress of the search completed before the failure to generate a new motion strategy for the reduced team of pursuers.

The final contribution is a novel formulation of the pursuit-evasion problem that modifies the pursuers' objective by requiring that the evader still be detected, even in spite of the malfunction of any single pursuer robot. This novel constraint, whereby two pursuers are required to detect an evader, has the benefit of providing redundancy to the search, should any member of the team become unresponsive, suffer temporary sensor disruption/failure, or otherwise become incapacitated. The proposed formulation produces plans that are inherently tolerant of some level of disturbance.

For each contribution discussed above, we describe an implementation of the algorithm and provide quantitative results that show substantial improvement over existing results.

Neuromorphic Computing from the Computer Science Perspective: Algorithms and Applications

Friday, October 15, 2021 - 02:20 pm
Storey Innovation Center 1400

Meeting Location:

Storey Innovation Center 1400

Live Virtual Meeting Link

Speaker's Bio: Catherine (Katie) Schuman is a research scientist at Oak Ridge National Laboratory (ORNL). She received her Ph.D. in Computer Science from the University of Tennessee (UT) in 2015, where she completed her dissertation on the use of evolutionary algorithms to train spiking neural networks for neuromorphic systems. She is continuing her study of algorithms for neuromorphic computing at ORNL. Katie has an adjunct faculty appointment with the Department of Electrical Engineering and Computer Science at UT, where she co-leads the TENNLab neuromorphic computing research group. Katie received the U.S. Department of Energy Early Career Award in 2019.

Talk Abstract: Neuromorphic computing is a popular technology for the future of computing. Much of the focus in neuromorphic computing research and development has been on new architectures, devices, and materials, rather than on the software, algorithms, and applications of these systems. In this talk, I will overview the field of neuromorphic computing from the computer science perspective. I will give an introduction to spiking neural networks, as well as some of the most common algorithms used in the field. Finally, I will discuss the potential for using neuromorphic systems in real-world applications, from scientific data analysis to autonomous vehicles.
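For readers new to the topic, the basic unit of the spiking neural networks the talk introduces is often modeled as a leaky integrate-and-fire (LIF) neuron. The sketch below uses illustrative parameter values not tied to any particular neuromorphic system.

```python
# Minimal leaky integrate-and-fire (LIF) neuron. Parameters are illustrative.
def lif_simulate(input_current, v_thresh=1.0, leak=0.9, v_reset=0.0):
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + i        # membrane potential leaks, then integrates input
        if v >= v_thresh:       # crossing the threshold emits a spike
            spikes.append(1)
            v = v_reset         # potential resets after spiking
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input only spikes once enough charge accumulates.
print(lif_simulate([0.5] * 10))  # -> [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Unlike the continuous activations of conventional neural networks, information here is carried by the timing of discrete spikes, which is what makes these models a natural fit for event-driven neuromorphic hardware.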

Collaborative Assistants For The Society

Friday, October 15, 2021 - 08:00 am
1112 Greene Street 5th Floor, Columbia, SC 29208

CASY + {Hack@Home} will take place on October 15, 2021 and

CASY 2.0 will take place on February 11, 2022

The event will be free-to-attend once registered and is intended to promote the ethical usage of digital assistants in society for daily life activities.

See casy.aiisc.ai or the event page for more information.