Bosch Research at the Conference on Neural Information Processing Systems
Bosch Research will present its latest research findings in artificial intelligence and machine learning at NeurIPS.
December 10 - 16, 2023
New Orleans, USA
The 37th Conference on Neural Information Processing Systems (NeurIPS 2023) will take place from December 10 to 16, 2023, in New Orleans, USA. The focus will be on machine learning and computational neuroscience.
The conference features invited talks as well as oral and poster presentations of refereed papers. Workshops in parallel tracks follow the main event.
Bosch Research will showcase its latest research findings through various presentations and at its own exhibition booth:
- “Wasserstein Gradient Flows for Optimizing Gaussian Mixture Policies”
- “Beyond Deep Ensembles: A Large-Scale Evaluation of Bayesian Deep Learning under Distribution Shift”
- “Learning Sample Difficulty from Pre-trained Models for Reliable Prediction”
- “Improved Algorithms for Stochastic Linear Bandits Using Tail Bounds for Martingale Mixtures”
- “Controlling Text-to-Image Diffusion by Orthogonal Finetuning”
- “Pseudo-Likelihood Inference”
- “Zero-Shot Anomaly Detection via Batch Normalization”
- “Leveraging Foundation Models to Improve Lightweight Clients in Federated Learning”
- “Text-driven Prompt Generation for Vision-Language Models in Federated Learning”
- “Towards Anytime Classification in Early-Exit Architectures by Enforcing Conditional Monotonicity”
- “Language Models are Weak Learners”
- “GradOrth: A Simple yet Efficient Out-of-Distribution Detection with Orthogonal Projection of Gradients”
- “Neural Functional Transformers”
- “Provably Bounding Neural Network Preimages”
- “UP-DP: Unsupervised Prompt Learning for Data Pre-Selection with Vision-Language Models”
- “3D Copy-Paste: Physically Plausible Object Insertion for Monocular 3D Detection”
- “One-Step Diffusion Distillation via Deep Equilibrium Models”
- “Permutation Equivariant Neural Functionals”
- “Learning with Explanation Constraints”
- “On the Importance of Exploration for Generalization in Reinforcement Learning”
- “Deep Equilibrium Based Neural Operators for Steady-State PDEs”
- “Finding Safe Zones of Markov Decision Processes Policies”