When.com Web Search

Search results

  2. Andrew Ng - Wikipedia

    en.wikipedia.org/wiki/Andrew_Ng

    Andrew Yan-Tak Ng (Chinese: 吳恩達; born April 18, 1976) [2] is a British-American computer scientist and technology entrepreneur focusing on machine learning and artificial intelligence (AI). [3] Ng was a cofounder and head of Google Brain and the former Chief Scientist at Baidu, building the company's Artificial Intelligence Group ...

  3. Google Brain - Wikipedia

    en.wikipedia.org/wiki/Google_Brain

    Google Brain was a deep learning artificial intelligence research team that served as the sole AI branch of Google before being incorporated under the newer umbrella of Google AI, a research division at Google dedicated to artificial intelligence.

  4. Coursera - Wikipedia

    en.wikipedia.org/wiki/Coursera

    Coursera Inc. (/kərˈsɛrə/) is an American global massive open online course provider. It was founded in 2012 [2] [3] by Stanford University computer science professors Andrew Ng and Daphne Koller. [4] Coursera works with universities and other organizations to offer online courses, certifications, and degrees in a variety of subjects.

  5. Geoffrey Hinton - Wikipedia

    en.wikipedia.org/wiki/Geoffrey_Hinton

    Geoffrey Everest Hinton (born 6 December 1947) is a British-Canadian computer scientist, cognitive scientist, cognitive psychologist, and Nobel Prize winner in Physics, known for his work on artificial neural networks, which earned him the title of "Godfather of AI".

  6. Transformer (deep learning architecture) - Wikipedia

    en.wikipedia.org/wiki/Transformer_(deep_learning...

    The plain transformer architecture had difficulty converging. In the original paper [1] the authors recommended using learning rate warmup: the learning rate should scale up linearly from 0 to its maximal value for the first part of training (usually recommended to be about 2% of the total number of training steps), before decaying again.

  7. Chinchilla (language model) - Wikipedia

    en.wikipedia.org/wiki/Chinchilla_(language_model)

    It is named "chinchilla" because it is a further development of a previous model family named Gopher. Both model families were trained in order to investigate the scaling laws of large language models.

  8. Daphne Koller - Wikipedia

    en.wikipedia.org/wiki/Daphne_Koller

    Daphne Koller (Hebrew: דפנה קולר; born August 27, 1968) is an Israeli-American computer scientist. She was a professor in the department of computer science at Stanford University [4] and a MacArthur Foundation fellowship recipient. [1]

  9. Reinforcement learning - Wikipedia

    en.wikipedia.org/wiki/Reinforcement_learning

    Reinforcement learning (RL) is an interdisciplinary area of machine learning and optimal control concerned with how an intelligent agent should take actions in a dynamic environment in order to maximize a reward signal. Reinforcement learning is one of the three basic machine learning paradigms, alongside supervised learning and unsupervised learning.
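The warmup-then-decay schedule described in the Transformer result above can be sketched in a few lines. The formula below is the one given in the original paper ("Attention Is All You Need"); the default `d_model` and `warmup_steps` values are illustrative, not universal:

```python
def transformer_lr(step, d_model=512, warmup_steps=4000):
    """Learning-rate schedule from the original Transformer paper:
    linear warmup for the first warmup_steps, then decay ~ 1/sqrt(step)."""
    step = max(step, 1)  # avoid division by zero at step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```

The rate rises linearly until `step == warmup_steps` (where the two terms inside `min` coincide) and then falls off as the inverse square root of the step count.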
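As a concrete instance of the reward maximization described in the reinforcement learning entry above, here is a minimal tabular Q-learning sketch on a toy five-state chain. The environment, hyperparameters, and function names are all illustrative assumptions, not taken from any of the cited articles:

```python
import random

# Toy 5-state chain: the agent starts at state 0 and receives
# reward 1 only for reaching state 4. All values are illustrative.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # move left / move right

def env_step(state, action):
    nxt = min(max(state + action, 0), GOAL)  # clamp to the chain
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def q_learning(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < epsilon:
                a = rng.randrange(2)  # explore
            else:
                # exploit, breaking ties randomly so an all-zero
                # table does not bias the agent toward one action
                best = max(q[s])
                a = rng.choice([i for i, v in enumerate(q[s]) if v == best])
            s2, r, done = env_step(s, ACTIONS[a])
            # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy at every non-goal state prefers moving right, and the learned values approach the discounted returns (Q of "right" at state 3 converges to 1, at state 2 to 0.9, and so on).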