
Patch note

February 12, 2020


New features (Core)

  • Bayesian hyperparameter optimization based on hyperopt.
  • You can now create tasks that employ guided hyperparameter optimization. Simply configure how many trials to run in total, how many to run in parallel, and the variation ranges for each of your parameters – as easy as before, but now with extra smarts! This feature is in beta – please request access via support.
  • You can now set a memory request for your deployment endpoints. This may help if Kubernetes is unduly evicting your workloads when they start crunching too many numbers for the cluster's liking.
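For a feel of what guided optimization does, here is a dependency-free sketch in the same spirit. The platform's implementation is based on hyperopt (which uses a Tree-structured Parzen Estimator); this stand-in simply runs trials in parallel batches and narrows the sampling range around the best trial so far. The objective function, parameter range, and trial counts below are all hypothetical:

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Hypothetical objective standing in for a full training run; lower is better.
def objective(learning_rate):
    return (learning_rate - 0.1) ** 2

def guided_search(low, high, total_trials=20, parallel=4, seed=0):
    # Evaluate trials in parallel batches, then shrink the sampling range
    # around the best result so far. A crude stand-in for hyperopt's TPE.
    rng = random.Random(seed)
    best_x, best_loss = None, float("inf")
    done = 0
    with ThreadPoolExecutor(max_workers=parallel) as pool:
        while done < total_trials:
            n = min(parallel, total_trials - done)
            batch = [rng.uniform(low, high) for _ in range(n)]
            for x, loss in zip(batch, pool.map(objective, batch)):
                if loss < best_loss:
                    best_x, best_loss = x, loss
            done += n
            # Exploit: halve the range, centered on the current best.
            span = (high - low) / 2
            low = max(low, best_x - span / 2)
            high = min(high, best_x + span / 2)
    return best_x, best_loss

best_x, best_loss = guided_search(0.001, 1.0, total_trials=20, parallel=4)
```

The real thing builds a probabilistic model of the loss surface rather than just shrinking a window, but the total/parallel trial-count knobs map directly onto the new task configuration.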

New features and bugfixes (Agent)

These features appear in agent version 0.28.2 – check the first line of your execution log to see which version your environment is running.

  • Largest files are now uploaded first, the idea being that your completed model is likely your largest output, so you can Hard Stop an execution once it has been uploaded.
  • On a similar note, there's now an upper limit on the number of output files. Live-uploaded files do not count towards this limit – and if you really do need to upload tons of files, it's better to package them into e.g. a tar archive.
  • Run initialization was made faster thanks to advanced techniques – namely, not sleeping as much.
  • Some upload errors are now handled more gracefully.
  • A new, more thorough pre- and post-run cleaning mode was added.
  • Advanced users can now tune the shared memory (SHM) size for the container.
  • Containers are now given more time to finish up what they're doing after an initial kill request. (Your code will receive a SIGINT signal first – that's except KeyboardInterrupt: in Python parlance.)
  • Trying to pull a Docker image that doesn't actually exist now yields an actually useful error message.
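If you do hit the output-file limit, Python's standard tarfile module is one easy way to bundle many small outputs into a single archive before handing it off as an output. A minimal sketch – the shard file names and the temporary working directory are made up for the example; in a real execution you would write the archive into your outputs path:

```python
import tarfile
import tempfile
from pathlib import Path

# Stand-in for a directory full of small output files.
workdir = Path(tempfile.mkdtemp())
for i in range(5):
    (workdir / f"shard-{i}.txt").write_text(f"data {i}\n")

# Bundle everything into one compressed archive instead of uploading
# each file individually.
archive = workdir / "shards.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    for f in sorted(workdir.glob("shard-*.txt")):
        tar.add(f, arcname=f.name)  # store flat names inside the archive

# Verify what ended up in the archive.
with tarfile.open(archive) as tar:
    names = tar.getnames()
```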
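The grace period means your code can checkpoint before the hard kill lands. In Python, SIGINT surfaces in the main thread as KeyboardInterrupt, so a try/except around the training loop is enough. In this self-contained sketch the save_checkpoint helper and step counts are hypothetical, and the SIGINT is raised in-process purely so the example runs on its own – in a real execution the agent sends it from outside:

```python
import signal

def save_checkpoint(step):
    # Hypothetical helper: persist model state so the run can resume later.
    return f"saved at step {step}"

def train(max_steps=5):
    step = 0
    try:
        while step < max_steps:
            step += 1
            if step == 2:
                # Simulate the agent's kill request arriving mid-training.
                signal.raise_signal(signal.SIGINT)
    except KeyboardInterrupt:
        # SIGINT arrived: save state and stop cleanly before the
        # follow-up hard kill would land.
        return save_checkpoint(step)
    return save_checkpoint(step)

result = train()
```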