As Google’s batch sizes for AI training continue to skyrocket, with some batch sizes now ranging from over 100k to one million, the company’s research arm is looking at ways to improve everything from efficiency and scalability to privacy for those whose data is used in large-scale training runs. This week Google Research published several pieces on new problems emerging at “mega-batch” training scale for some of its most-used models. One of the most noteworthy new items…