Environment Support on Compute Instance: You can now use the same environment image whether you run a full job on a compute cluster or an experiment on a compute instance. An illustrative sketch follows.
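As a minimal sketch with the Azure ML Python SDK v2 (`azure-ai-ml`), a single registered environment can be referenced by name for both compute targets; the workspace details, environment name, script, and compute names below are placeholders, not values from this announcement:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, command

# Connect to the workspace (subscription, resource group, and workspace
# names are placeholders).
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# One registered environment, referenced by name, reused for both runs
# (hypothetical environment name).
shared_env = "my-training-env@latest"

# Quick experiment on a compute instance (hypothetical compute name).
instance_job = command(
    code="./src",
    command="python train.py --epochs 1",
    environment=shared_env,
    compute="my-compute-instance",
)

# Full training job on a compute cluster, using the same environment image.
cluster_job = command(
    code="./src",
    command="python train.py --epochs 50",
    environment=shared_env,
    compute="my-cpu-cluster",
)

ml_client.jobs.create_or_update(instance_job)
ml_client.jobs.create_or_update(cluster_job)
```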
Model Packaging (v2): You can now build model packages to deploy to Online Endpoints through the Azure Machine Learning inference server or a custom inference server of your choice.
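As a rough sketch of what packaging for an online endpoint can look like with the Python SDK v2: the packaging classes are in preview, so the exact class and parameter names may differ by SDK version, and the model name, environment name, code folder, and scoring script below are placeholders:

```python
from azure.ai.ml.entities import (
    ModelPackage,
    AzureMLOnlineInferencingServer,
    CodeConfiguration,
)

# Package a registered model for deployment behind the Azure ML
# inference server (all names/versions here are placeholders).
package_config = ModelPackage(
    target_environment="my-model-pkg-env",
    inferencing_server=AzureMLOnlineInferencingServer(
        code_configuration=CodeConfiguration(
            code="./src",               # folder containing the scoring script
            scoring_script="score.py",  # entry script for online scoring
        )
    ),
)

# ml_client is assumed to be an authenticated azure.ai.ml.MLClient
# for the target workspace.
package = ml_client.models.package("my-model", "1", package_config)
```

The resulting package environment can then be referenced when creating an online deployment, or built against a custom inference server instead of the Azure Machine Learning one.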
Label pixels in images through Semantic Segmentation: You can now add tags or labels to individual pixels within images, use a vendor labeling workforce to label the images, and organize label categories through hierarchical labeling.
New base inference models with fine-tuning capabilities: You can now use two new base inference models (Babbage-002 and Davinci-002) and fine-tune three models (Babbage-002, Davinci-002, and GPT-3.5-Turbo), all accessible through the Azure Machine Learning model catalog.