Announcing Keynote Speaker! A Distinguished Scientist known for his groundbreaking contributions to the field of Artificial Intelligence
Silver Professor at the Courant Institute, New York University & Vice President and Chief AI Scientist at Meta
Abstract: TBA

Biography: Yann LeCun is VP and Chief AI Scientist at Meta and Silver Professor at NYU, affiliated with the Courant Institute and the Center for Data Science. He was the founding Director of Facebook AI Research and of the NYU Center for Data Science. He received an EE Diploma from ESIEE (Paris) in 1983 and a PhD in Computer Science from Sorbonne Université (Paris) in 1987. After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories. He became head of the Image Processing Research Department at AT&T Labs-Research in 1996, and joined NYU in 2003 after a short tenure at the NEC Research Institute. In late 2013, LeCun became Director of AI Research at Facebook, while remaining on the NYU faculty part-time. He was a visiting professor at the Collège de France in 2016. His research interests include machine learning and artificial intelligence, with applications to computer vision, natural language understanding, robotics, and computational neuroscience. He is best known for his work in deep learning and the invention of the convolutional network method, which is widely used for image, video, and speech recognition. He is a member of the US National Academy of Sciences, the National Academy of Engineering, and the French Académie des Sciences, a Chevalier de la Légion d’Honneur, a fellow of AAAI and AAAS, and the recipient of the 2022 Princess of Asturias Award, the 2014 IEEE Neural Network Pioneer Award, the 2015 IEEE Pattern Analysis and Machine Intelligence Distinguished Researcher Award, the 2016 Lovie Award for Lifetime Achievement, the University of Pennsylvania Pender Award, and honorary doctorates from IPN (Mexico), EPFL, and Université Côte d’Azur. He is the recipient of the 2018 ACM Turing Award (with Geoffrey Hinton and Yoshua Bengio) for “conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing”.
Cryptographic Acceleration
Abstract: In this talk, I will discuss some crypto acceleration techniques, namely algorithms, software optimizations, and processor instructions, and show how they have changed the performance characteristics of symmetric-key and public-key cryptographic schemes and have impacted the selection of schemes in protocols such as TLS. Examples include AES-GCM, AES-GCM-SIV, RSA, and ECDSA with the NIST P-256 curve. I will explain recent developments where crypto acceleration instructions appear in “vectorized” (SIMD) versions that support processing up to 4 independent input streams in parallel, as well as additional instructions, namely GF-NI, that have been added to x86-64 architectures and can be useful as building blocks for symmetric-key cryptography.
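To ground the discussion, here is a minimal, hedged sketch (not part of the talk) of using AES-GCM through Python's cryptography package; on x86-64, its OpenSSL backend typically dispatches to AES-NI and PCLMULQDQ/VPCLMULQDQ code paths when the CPU supports them, which is exactly the kind of acceleration the talk covers. All variable names are placeholders.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
aead = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce; never reuse with the same key
ciphertext = aead.encrypt(nonce, b"payload", b"associated data")
plaintext = aead.decrypt(nonce, ciphertext, b"associated data")
assert plaintext == b"payload"

Whether hardware acceleration is actually exercised depends on the CPU and the build of the backend library, so treat this purely as an illustration of the interface, not a performance claim.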
Professor at the University of Haifa and Distinguished Engineer at Meta
Biography: Shay Gueron is a professor of mathematics at the University of Haifa who specializes in applied cryptography and security, and he now leads the cybersecurity program at the university’s Business School. From 2005 to 2017 Shay was an Intel Senior Principal Engineer, serving as the Chief Core Cryptography Architect of the CPU Architecture Group. From 2017 to 2023 he served as a Senior Principal Engineer at AWS. In 2023 Shay joined Meta as a Distinguished Engineer. Shay’s interests include cryptography, security, and algorithms. He is responsible for some of the recent CPU instructions that speed up cryptographic algorithms, such as AES-NI, PCLMULQDQ, VPMADD52, vector AES, vector PCLMULQDQ, and GF-NI, and for some microarchitectural enhancements in Intel’s Big Cores. He has contributed software to open-source libraries such as OpenSSL, BoringSSL, and NSS, offering significant performance gains for encryption, authenticated encryption, public-key algorithms, and hashing. He was one of the architects of Intel Software Guard Extensions (SGX), in charge of the cryptographic definition and implementation of SGX, and the inventor of the Memory Encryption Engine, starting from the architecture codenamed Skylake. Shay worked on definitions and optimizations for cloud-scale encryption, where he contributed a mode of operation for AWS’s Key Management Service and AES-GCM key commitment in the AWS Encryption SDK. He is one of the team members behind the post-quantum KEM named BIKE, which has been considered by NIST for standardization as a KEM alternative. Together with co-authors Yehuda Lindell (Bar-Ilan University) and Adam Langley (Google), Shay defined the nonce-misuse-resistant AEAD named AES-GCM-SIV, which is now RFC 8452. At Meta, Shay defined the DNDK-GCM mode of operation, SIV-MAC, and Double-Polyval-MAC for use at cloud scale.
Scalable Vector Analytics: A Story of Twists and Turns
Abstract: Similarity search in high-dimensional data spaces was a relevant and challenging data management problem in the early 1970s, when the first solutions to this problem were proposed. Today, fifty years later, we can safely say that the exact same problem is more relevant (from Time Series Management Systems to Vector Databases) and more challenging than ever. Very large amounts of high-dimensional data are now omnipresent (ranging from traditional multidimensional data to time series and deep embeddings), and the performance requirements (i.e., response time and accuracy) of a variety of applications that need to process and analyze these data have become very stringent and demanding. In these past fifty years, high-dimensional similarity search has been studied in its many flavors: similarity search algorithms for exact and approximate, one-off and progressive query answering; approximate algorithms with and without (deterministic or probabilistic) quality guarantees; solutions for on-disk and in-memory data, static and streaming data; approaches based on multidimensional space-partitioning and metric trees, random projections and locality-sensitive hashing (LSH), product quantization (PQ) and inverted files, k-nearest-neighbor graphs, and optimized linear scans. Surprisingly, the work on data series (or time series) similarity search has recently been shown to achieve state-of-the-art performance for several variations of the problem, on both time-series and general high-dimensional vector data. In this talk, we will touch upon the different aspects of this interesting story, present some of the state-of-the-art solutions, and discuss open research directions.
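As a reference point for the methods listed above, the sketch below shows the simplest baseline the abstract mentions: an exact k-nearest-neighbor query answered by a brute-force linear scan over an in-memory vector collection. This is a minimal NumPy illustration of ours, not code from the talk, and all names are placeholders.

import numpy as np

def knn_linear_scan(queries, data, k):
    # Exact k-NN under Euclidean distance via a brute-force linear scan.
    # queries: (q, d) array, data: (n, d) array; returns (q, k) indices into data.
    d2 = ((queries[:, None, :] - data[None, :, :]) ** 2).sum(axis=-1)  # (q, n) squared distances
    topk = np.argpartition(d2, k, axis=1)[:, :k]                       # k smallest, unordered
    order = np.argsort(np.take_along_axis(d2, topk, axis=1), axis=1)   # sort within the top k
    return np.take_along_axis(topk, order, axis=1)

rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 128)).astype(np.float32)   # e.g., deep embeddings
queries = rng.normal(size=(5, 128)).astype(np.float32)
print(knn_linear_scan(queries, data, k=10))

Index structures such as LSH, PQ with inverted files, or k-nearest-neighbor graphs trade this exact scan for approximate answers at much lower latency, which is where the twists and turns of the talk come in.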
Distinguished Professor of Computer Science at Université Paris Cité
Biography: Themis Palpanas is an elected Senior Member of the French University Institute (IUF), a distinction that recognizes excellence across all academic disciplines, and a Distinguished Professor of Computer Science at Université Paris Cité (France), where he is director of the Data Intelligence Institute of Paris (diiP) and director of the data management group, diNo. He received the BS degree from the National Technical University of Athens, Greece, and the MSc and PhD degrees from the University of Toronto, Canada. He has previously held positions at the University of California at Riverside, the University of Trento, and the IBM T.J. Watson Research Center, and has visited Microsoft Research and the IBM Almaden Research Center. His interests include problems related to data science (big data analytics and machine learning applications). He is the author of 14 patents. He is the recipient of 3 Best Paper awards and the IBM Shared University Research (SUR) Award. His service includes the VLDB Endowment Board of Trustees (2018-2023), Editor-in-Chief for the PVLDB Journal (2024-2025) and the BDR Journal (2016-2021), PC Chair for IEEE BigData 2023 and the ICDE 2023 Industry and Applications Track, General Chair for VLDB 2013, Associate Editor for the TKDE Journal (2014-2020), and Research PC Vice Chair for ICDE 2020.
Machine Learning Techniques for Data Reduction of Scientific Applications
Distinguished Professor in the Department of Computer and Information Science and Engineering, University of Florida
Abstract: Scientific applications from high energy physics, nuclear physics, radio astronomy, and light sources generate large volumes of data at high velocity, increasingly outpacing the growth of computing power and of network and storage bandwidths and capacities. Furthermore, this growth is also seen in next-generation experimental and observational facilities, making data reduction or compression an essential stage of future computing systems. Scientists are principally interested in downstream quantities, called Quantities of Interest (QoI), that are derived from the raw data. Thus, it is important that reduction methods quantify their impact not only on the primary data (PD) outputs but also on the QoI. The ability to quantify these effects with realistic numerical bounds is essential if scientists are to have confidence in applying data reduction.
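As a toy illustration of the kind of check the abstract calls for (not from the talk; the uniform quantizer and the "energy" QoI below are placeholders of ours), the sketch lossily reduces some primary data and then measures the relative error induced on a derived Quantity of Interest, rather than only on the primary data itself.

import numpy as np

def uniform_quantize(x, nbits=8):
    # Stand-in for a real scientific compressor: uniform scalar quantization.
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (2**nbits - 1) or 1.0        # guard against constant data
    return np.round((x - lo) / scale) * scale + lo   # lossy reconstruction

def relative_qoi_error(x, x_hat, qoi):
    # Error on a derived Quantity of Interest, not on the raw values.
    a, b = qoi(x), qoi(x_hat)
    return abs(a - b) / (abs(a) + 1e-30)

rng = np.random.default_rng(0)
pd_values = rng.normal(size=100_000)                  # "primary data"
pd_hat = uniform_quantize(pd_values, nbits=6)
energy = lambda v: float(np.mean(v**2))               # an example QoI
print("PD RMSE:", float(np.sqrt(np.mean((pd_values - pd_hat)**2))))
print("QoI relative error:", relative_qoi_error(pd_values, pd_hat, energy))

Real compressors for scientific data typically provide error bounds on the primary data; the point of the example is that such a bound does not automatically transfer to a derived QoI, which is why QoI-aware guarantees matter.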
Biography: Sanjay Ranka is a Distinguished Professor in the Department of Computer and Information Science and Engineering at the University of Florida. From 1999 to 2002, he was the Chief Technology Officer at Paramark (Sunnyvale, CA). Paramark was recognized by VentureWire/Technologic Partners as a Top 100 Internet technology company in 2001 and 2002, and was acquired in 2002. Sanjay has also held positions as a tenured faculty member at Syracuse University, an academic visitor at IBM, and a summer researcher at Hitachi America Limited.