Standard Model for AI

The standard model for AI is that the human specifies the objective, say a discounted sum of rewards, and the machine says, "Okay, I'm on it." We now know this model doesn't work, because we can't specify the objective correctly. And it isn't just AI's model: in control theory you minimize a cost function, in statistics you minimize risk, in operations research (from which this formulation is actually borrowed) you maximize a discounted sum of rewards, and in economics you maximize a welfare function, GDP, profit, or whatever the objective is. All of these are the same basic standard model, and it's a bad model. King Midas could tell you that: he specified his objective ("I want everything I touch to turn to gold"), the machine said "righty-ho", and, in his case, the machine was the gods.
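To make the shared formalism concrete, here is a minimal sketch (not from the talk) of the standard-model objective, the discounted sum of rewards J = Σ_t γ^t r_t, with a deliberately misspecified reward to illustrate the Midas problem. The toy environment, state variables, and reward function are all invented for illustration.

```python
# Minimal sketch of the "standard model": an agent maximizes a
# discounted sum of rewards, J = sum_t gamma^t * r_t.
# The toy environment and the misspecified reward are invented.

GAMMA = 0.9  # discount factor

def discounted_return(rewards, gamma=GAMMA):
    """Discounted sum of rewards: sum_t gamma^t * r_t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def misspecified_reward(state):
    """Midas-style objective: count gold, ignore everything else we care about."""
    return state["gold"]

# A greedy "agent" dutifully optimizes exactly the stated objective.
state = {"gold": 0, "food": 10}
rewards = []
for _ in range(5):
    state["gold"] += 1   # turn one more thing to gold...
    state["food"] -= 2   # ...including things we actually needed
    rewards.append(misspecified_reward(state))

print(discounted_return(rewards))  # the objective reports success regardless
```

The point of the sketch is that the optimizer is doing its job perfectly; the failure is entirely in the objective we wrote down.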

Deep Learning for System 2 Processing

Slides: https://photos.app.goo.gl/KfQjX5FqcGEsVEs49

1:09:37 [Talk: Deep Learning for System 2 Processing by Yoshua Bengio]
1:10:10 No-Free-Lunch Theorem, Inductive Biases, Human-Level AI
1:15:03 Missing to Extend Deep Learning to Reach Human-Level AI
1:16:48 Hypotheses for Conscious Processing by Agents, Systematic Generalization
1:22:02 Dealing with Changes in Distribution
1:25:13 Contrast with the Symbolic AI Program
1:28:07 System 2 Basics: Attention and Conscious Processing
1:28:19 Core Ingredient for Conscious Processing: Attention
1:29:16 From Attention to Indirection
1:30:35 From Attention to Consciousness
1:31:59 Why a Consciousness Bottleneck?
1:33:07 Meta-Learning: End-to-End OOD Generalization, Sparse Change Prior
1:33:21 What Causes Changes in Distribution?
1:34:56 Meta-Learning Knowledge Representation for Good OOD Performance
1:35:14 Example: Discovering Cause and Effect
1:36:49 Operating on Sets of Pointable Objects with Dynamically Recombined
1:37:36 RIMs: Modularize Computation and Operate on Sets of Named and Typed Objects
1:39:42 Results with Recurrent Independent Mechanisms
1:40:17 Hypotheses for Conscious Processing by Agents, Systematic Generalization
1:40:46 Conclusions
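Attention (1:28:19) is the core ingredient Bengio builds on, and "from attention to indirection" (1:29:16) is the idea that a query softly selects which object to read, like a dynamic pointer. As a reference point, here is a minimal sketch of standard scaled dot-product attention in NumPy; this is generic attention, not Bengio's RIMs code, and all names and values are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """Scaled dot-product attention: soft, content-based selection.

    queries: (n_q, d), keys: (n_kv, d), values: (n_kv, d_v).
    Each query retrieves a convex combination of values, weighted by
    how well it matches each key -- a differentiable "pointer" into
    a set of objects.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (n_q, n_kv) match scores
    weights = softmax(scores, axis=-1)       # attention distribution
    return weights @ values                  # (n_q, d_v) retrieved content

# Toy usage: one query softly selects among three key/value slots.
q = np.array([[1.0, 0.0]])
k = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
v = np.array([[1.0], [2.0], [3.0]])
print(attention(q, k, v))
```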

Self-Supervised Learning

Slides: https://photos.app.goo.gl/9JwsDo8wMZatpj9w7

36:04 [Talk: Self-Supervised Learning by Yann LeCun]
36:25 What is Deep Learning?
38:37 Supervised Learning works but requires many labeled samples
39:25 Supervised DL works amazingly well, when you have data
40:05 Supervised Symbol Manipulation
41:50 Deep Learning Saves Lives
43:40 Reinforcement Learning: works great for games and simulations
45:12 Three challenges for Deep Learning
47:39 How do humans and animals learn so quickly?
47:43 Babies learn how the world works by observation
48:43 Early Conceptual Acquisition in Infants [from Emmanuel Dupoux]
49:33 Prediction is the essence of Intelligence
50:28 Self-Supervised Learning = Filling in the Blanks
50:53 Natural Language Processing: works great!
51:55 Self-Supervised Learning for Video Prediction
52:09 The world is stochastic
52:43 Solution: latent variable energy-based models
53:55 Self-supervised Adversarial Learning for Video Prediction
54:12 Three Types of Learning
55:30 How Much Information is the Machine Given during Learning?
55:54 The Next AI Revolution
56:23 Energy-Based Models
56:32 Seven Strategies to Shape the Energy Function
57:02 Denoising AE: discrete
58:44 Contrastive Embedding
1:00:39 MoCo on ImageNet
1:00:52 Latent-Variable EBM for inference & multimodal prediction
1:02:07 Learning a (stochastic) Forward Model for Autonomous Driving
1:02:26 A Forward Model of the World
1:04:42 Overhead camera on highway. Vehicles are tracked
1:05:00 Video Prediction: inference
1:05:15 Video Prediction: training
1:05:30 Actual, Deterministic, VAE+Dropout Predictor/encoder
1:05:57 Adding an Uncertainty Cost (doesn’t work without it)
1:06:01 Driving an Invisible Car in “Real” Traffic
1:06:51 Conclusions
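LeCun's framing of self-supervised learning as "filling in the blanks" (50:28), and the denoising autoencoder strategy (57:02), can be illustrated with a tiny denoising model: corrupt part of the input and train the network to reconstruct the original. This is a generic sketch in PyTorch with invented dimensions, masking rate, and data, not code from the talk.

```python
import torch
import torch.nn as nn

# Tiny denoising autoencoder: blank out parts of the input,
# train the model to fill them back in. All shapes/data are invented.
D = 32                      # input dimensionality
model = nn.Sequential(      # encoder -> decoder
    nn.Linear(D, 16), nn.ReLU(),
    nn.Linear(16, D),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(256, D)                        # stand-in "data"
for step in range(200):
    mask = (torch.rand_like(x) > 0.3).float()  # drop ~30% of entries
    x_corrupt = x * mask                       # the "blanked" input
    x_hat = model(x_corrupt)
    loss = ((x_hat - x) ** 2).mean()           # reconstruct the original
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))  # reconstruction error after training
```

Note that a plain squared-error loss predicts the average of plausible completions, which is exactly why the talk moves on to latent-variable energy-based models for stochastic worlds (52:09, 52:43).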

Stacked Capsule Autoencoders

Slides: https://photos.app.goo.gl/3QRXXtY4hUvKpEkF7

03:09 Two approaches to object recognition
03:53 Problems with CNNs: Dealing with viewpoint changes
04:42 Equivariance vs Invariance
05:25 Problems with CNNs
10:04 Computer vision as inverse computer graphics
11:55 Capsules 2019: Stacked Capsule Auto-Encoders
13:21 What is a capsule?
14:58 Capturing intrinsic geometry
15:37 The generative model of a capsule auto-encoder
20:28 The inference problem: Inferring wholes from parts
21:44 A multi-level capsule auto-encoder
22:30 How the set transformer is trained
23:14 Standard convolutional neural network for refining word representations based on their context
23:41 How transformers work
24:43 Some difficult examples of MNIST digits
25:20 Modelling the parts of MNIST digits
27:03 How some of the individual part capsules contribute to the reconstructions
28:37 Unsupervised clustering of MNIST digits using stacked capsule autoencoders
31:25 The outer loop of vision
31:36 Dealing with real 3-D images
32:51 Conclusion
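The geometric idea behind capsules (13:21, 14:58) is that a part's pose is the object's pose composed with a fixed object-to-part transform, so representations change predictably with viewpoint (equivariance, 04:42) rather than discarding pose (invariance). Here is a minimal sketch of that generative step using 2-D affine matrices; the specific angles and offsets are invented, and this is an illustration of the composition idea, not Hinton's implementation.

```python
import numpy as np

def affine(theta, tx, ty):
    """2-D rotation by theta plus translation, as a 3x3 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0,  1]])

# A capsule's generative step: part pose = object pose @ object-to-part pose.
object_pose = affine(np.pi / 6, 2.0, 1.0)   # where the whole object is
part_in_object = affine(0.0, 0.5, -0.3)     # fixed intrinsic geometry of the part
part_pose = object_pose @ part_in_object

# Equivariance: a viewpoint change transforms every part pose the same way,
# because it simply left-multiplies the object pose.
viewpoint_change = affine(np.pi / 4, -1.0, 0.0)
print(viewpoint_change @ part_pose)
```

Inference (20:28) runs this relationship in reverse: observed part poses vote for the object pose that would have generated them.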

Visual Studio Online

Visual Studio Online provides cloud-powered development environments for any activity – whether it’s a long-term project, or a short-term task like reviewing a pull request. You can work with these environments from Visual Studio Code, Visual Studio (sign up for the Private Preview), or a browser-based editor that’s accessible anywhere! You can even connect your own self-hosted environments to Visual Studio Online at no cost.

Additionally, Visual Studio Online brings many of the benefits of DevOps, like repeatability and reliability, which have typically been reserved for production workloads, to development environments. At the same time, Visual Studio Online remains personalizable, so developers can keep the tools, processes, and configurations they have come to love and rely on: truly the best of both worlds!

Learn more at online.visualstudio.com.

Because of its connection to GitHub, it can serve as a OneNote alternative as well.