Craig Innes

Introduction / Motivation

I am a Chancellor's Fellow within the Institute for Perception, Action and Behaviour (IPAB) at the University of Edinburgh. I'm interested in guarantees on safety, trustworthiness and risk for cyber-physical systems. In particular: how can we combine traditional symbolic-logic and formal-verification models with the newer black-box probabilistic models found in modern machine learning systems?

Previously, I worked as a postdoctoral researcher in the UoE Robust Autonomy and Decisions Group (RAD). Before that, I did a PhD in the Institute for Language, Cognition and Computation (ILCC) on the semantics of unawareness (or "unknown unknowns"). My thesis is still available here if you're interested.

In my spare time I used to design small video games. You can find a list of them on my Itch.io page. If you're interested in making games, I suggest using Godot and then joining a game jam to get inspired.

Fully Funded PhD Positions Available

I currently have funding for PhD students (home or international) at the University of Edinburgh on the topic of safety specifications in Hybrid AI Systems. Further details are available here. Feel free to email me if you have further questions.

I also wrote a short FAQ about applying for a PhD at Edinburgh.

Related Work

Conference Papers

  • Innes, C., & Ramamoorthy, S. (2023). Testing rare downstream safety violations via upstream adaptive sampling of perception error models. In International Conference on Robotics and Automation (ICRA)
  • Lahariya, M., Innes, C., Develder, C., & Ramamoorthy, S. (2022). Learning physics-informed simulation models for soft robotic manipulation: A case study with dielectric elastomer actuators. In International Conference on Intelligent Robots and Systems (IROS)
  • Corso, A., Katz, S., Innes, C., Du, X., Ramamoorthy, S., & Kochenderfer, M. (2022). Risk-driven design of perception systems. In Conference on Neural Information Processing Systems (NeurIPS)
  • Innes, C., & Ramamoorthy, S. (2022). Automated testing with temporal logic specifications for robotic controllers using adaptive experiment design. In International Conference on Robotics and Automation (ICRA)
  • Viano, L., Huang, Y., Kamalaruban, P., Innes, C., Ramamoorthy, S., & Weller, A. (2022). Robust Learning from Observation with Model Misspecification. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS)
  • Innes, C., & Ramamoorthy, S. (2021). ProbRobScene: A Probabilistic Specification Language for 3D Robotic Manipulation Environments. In International Conference on Robotics and Automation (ICRA)
  • Innes, C., & Ramamoorthy, S. (2020). Elaborating on Learned Demonstrations with Temporal Logic Specifications. In Robotics: Science and Systems (RSS)
  • Innes, C., & Lascarides, A. (2019). Learning Factored Markov Decision Processes with Unawareness. In Uncertainty in Artificial Intelligence (UAI)
  • Innes, C., & Lascarides, A. (2019). Learning Structured Decision Problems with Unawareness. In International Conference on Machine Learning (ICML)

Journal Papers

  • Burke, M., Lu, K., Angelov, D., Straižys, A., Innes, C., Subr, K., & Ramamoorthy, S. (2023). Learning robotic ultrasound scanning using probabilistic temporal ranking. Autonomous Robots

Workshop Papers and Extended Abstracts

  • Innes, C., Ireland, A., Lin, Y., & Ramamoorthy, S. (2023). Anticipating Accidents through Reasoned Simulation. In International Symposium on Trustworthy Autonomous Systems
  • Innes, C., Hristov, Y., Kamaras, G., & Ramamoorthy, S. (2021). Automatic Synthesis of Experiment Designs from Probabilistic Environment Specifications. In 10th Workshop on Synthesis (SYNT) at the International Conference on Computer Aided Verification (CAV)
  • Innes, C., & Lascarides, A. (2019). Learning Factored Markov Decision Processes with Unawareness - Extended Abstract. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS)

Contact

You can email me at [my-first-name] [dot] [my-last-name] [at] ed [dot] ac [dot] uk