Call for Papers
Machine learning has been tremendously successful in enabling ubiquitous smart applications that facilitate people's everyday lives. The training and inference of machine learning models have traditionally taken place in centralised cloud facilities. Running machine learning models in the cloud, however, comes with high rental and operational costs, often unpredictable network latency, and the potential risk of compromising users' data privacy. Recent advancements in computational power at the mobile edge, including smart consumer devices such as mobile phones, tablets, and smart watches, have made it possible to execute machine learning models partially or entirely on device. A more ambitious endeavour, which has already proven feasible, is to train or partially train models on devices. Distributed inference and training across a multitude of geographically separated devices with diverse computational capacities and network qualities are challenging topics that call for further research and discussion to push these areas forward.
The 4th edition of the DistributedML workshop at CoNEXT'2023 will serve as a forum for networking and AI researchers to discuss challenging topics, share new ideas, and exchange experiences across the areas of networking and distributed AI, from both theoretical and experimental perspectives. We warmly invite submissions of original, previously unpublished papers addressing key issues in distributed machine learning, specifically in areas that include, but are not limited to:
- Distributed inference and offloading
- Efficient DNN inference frameworks
- Efficient training/inference for large generative foundation models
- DNN computation sharing in local networks
- DNN-based compression schemes
- Distributed and asynchronous training algorithms
- Channel optimisations for distributed ML
- Federated and collaborative learning
- Fairness and biases in federated learning
- Security and privacy in distributed learning
- Interpretability in distributed/collaborative learning
- Training and deployment of hyperscale models
- Node heterogeneity and stragglers in distributed ML
- Novel ML applications in IoT, MEC, SDN or NFV scenarios
In this iteration, we specifically want to invite and encourage contributions in the field of training and deployment of large generative foundation models, both of which pose major challenges for tractable and sustainable deployment in the wild.
Submissions
Solicited submissions include both full technical workshop papers and white-paper position papers. The maximum length of a submission is 6 pages (excluding references) in 2-column 10pt ACM format. Please check out the sample-sigconf.tex template and set up the LaTeX class as \documentclass[10pt,sigconf,letterpaper,anonymous,nonacm]{acmart}.
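For reference, a minimal document skeleton using the required class options might look as follows (the title, abstract, and section contents are placeholders; the `anonymous` option suppresses author information for double-blind review):

```latex
% Required class setup for DistributedML submissions
\documentclass[10pt,sigconf,letterpaper,anonymous,nonacm]{acmart}

\begin{document}

\title{Your Paper Title}
% Do not include \author blocks that reveal identity;
% the `anonymous` option hides author fields at \maketitle.

\begin{abstract}
Abstract text goes here.
\end{abstract}

\maketitle

\section{Introduction}
Paper content goes here, within the 6-page limit
(excluding references).

\end{document}
```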
All submissions must be **double-blind** and will be peer-reviewed. For anonymity purposes, you must remove author names and any other uniquely identifying features from your submitted paper.
All submissions must be uploaded to the workshop submission site available here: distributedml2023.hotcrp.com.
Any questions regarding submission issues should be directed to Stefanos Laskaridis (mail (at) stefanos.cc).
Important Dates
| Milestone | Date |
| --- | --- |
| Paper Submission | |
| Notification of Acceptance | |
| Camera ready | October 25, 2023 (AoE) |
| Official publication date | December 5, 2023 |
| Workshop Event | December 8, 2023 |