Google BERT Architecture Explained 1/3 - (BERT, Seq2Seq, Encoder Decoder)
Description:
Google BERT (Bidirectional Encoder Representations from Transformers) has been a breakthrough machine learning model for NLP. In this video series I explain the architecture and help reduce the time it takes to understand this complex model.

References used in this part of the video:
https://ai.google/research/teams/language/
https://rajpurkar.github.io/SQuAD-explorer/
https://google.github.io/seq2seq/

All references:
https://arxiv.org/pdf/1706.03762.pdf
https://github.com/huggingface/pytorch-pretrained-BERT
http://mlexplained.com/2017/12/29/attention-is-all-you-need-explained/
https://towardsdatascience.com/deconstructing-bert-distilling-6-patterns-from-100-million-parameters-b49113672f77
https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270
https://ai.google/research/teams/language/
https://rajpurkar.github.io/SQuAD-explorer/
https://google.github.io/seq2seq/
https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html
https://stats.stackexchange.com/questions/321054/what-are-residual-connections-in-rnns

Thanks to training partner TechieGlobus: http://www.techieglobus.com
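Since the description points at the huggingface pytorch-pretrained-BERT repository, here is a minimal sketch of what "bidirectional encoder representations" look like in practice with that package. The checkpoint name 'bert-base-uncased' and the example sentence are illustrative assumptions, not taken from the video.

import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

# 'bert-base-uncased' is an assumed checkpoint name; any published
# BERT checkpoint from the repository above would work the same way.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

# BERT expects [CLS] ... [SEP] markers around the WordPiece-tokenized input.
tokens = ['[CLS]'] + tokenizer.tokenize('BERT reads context in both directions.') + ['[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    # encoded_layers: one hidden-state tensor per Transformer encoder layer,
    # each of shape (batch_size, sequence_length, hidden_size); every token's
    # vector is conditioned on the full left AND right context at once.
    encoded_layers, pooled_output = model(input_ids)

print(len(encoded_layers), encoded_layers[-1].shape)  # 12 layers for bert-base

Unlike a left-to-right language model, every encoder layer here attends over the whole sequence simultaneously, which is what "bidirectional encoder" refers to in the video title.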
YouTube URL:
https://www.youtube.com/watch?v=hu8lEz9oZZ0&list=PLbO2Kg9LNmpqPijv8Hvix4iC6MtCoAwbf&index=2&t=220s