끵뀐꿩긘의 여러가지

List: Naver boostcamp -ai tech/Paper review (4 posts)

Paper: Conditional Generative Adversarial Nets (2014, Mehdi Mirza)
https://arxiv.org/abs/1411.1784
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to conditi.."
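The construction the excerpt describes, feeding the condition y into the network alongside its usual input, can be sketched for the generator side as follows. This is a minimal PyTorch sketch under my own assumptions (single hidden layer, layer sizes, one-hot labels); the paper's actual MNIST model uses a different architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalGenerator(nn.Module):
    """G(z | y): the condition y is concatenated with the noise z, so the
    first layer learns a joint hidden representation of both inputs.
    Sizes here are illustrative assumptions, not the paper's exact setup."""
    def __init__(self, z_dim=100, n_classes=10, out_dim=784):
        super().__init__()
        self.n_classes = n_classes
        self.net = nn.Sequential(
            nn.Linear(z_dim + n_classes, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
            nn.Tanh(),
        )

    def forward(self, z, y):
        # y holds integer class labels; encode as one-hot and feed with z
        y_onehot = F.one_hot(y, num_classes=self.n_classes).float()
        return self.net(torch.cat([z, y_onehot], dim=1))

# Usage: generate four samples, all conditioned on class 3
G = ConditionalGenerator()
z = torch.randn(4, 100)
y = torch.full((4,), 3, dtype=torch.long)
fake = G(z, y)  # shape: (4, 784)
```

The discriminator is conditioned the same way, receiving (x, y) instead of x alone.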

Paper: Generative Adversarial Nets (2014, Ian J. Goodfellow)
https://arxiv.org/abs/1406.2661
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that.."
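The adversarial process mentioned in the truncated abstract is formalized in the paper as a two-player minimax game over the value function

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

where D is trained to distinguish real samples from generated ones, and G is trained to make D's task as hard as possible.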
Paper: Fully Convolutional Networks for Semantic Segmentation
I translated the Fully Convolutional Networks for Semantic Segmentation paper myself. Some passages may read unnaturally or contain mistranslations; if you notice anything odd or incorrect while reading, I would appreciate you pointing it out. Items marked with * are not explained in the paper but are things I added because I became curious or found them necessary while reading. The end of each page of the paper is marked with +(page number).
"Fully Convolutional Networks for Semantic Segmentation. Jonathan Long∗, Evan Shelhamer∗, Trevor Darrell, UC Berkeley. {jonlong,shelhamer,trevor}@cs.berkeley.edu Abstrac.."

Paper: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (2015, Sergey Ioffe)
https://arxiv.org/abs/1502.03167
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previ.."
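The abstract is cut off before the method itself; for context, the batch normalizing transform the paper proposes standardizes each activation over a mini-batch $\mathcal{B} = \{x_1, \ldots, x_m\}$ and then applies a learned scale and shift:

$$\mu_\mathcal{B} = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad \sigma_\mathcal{B}^2 = \frac{1}{m}\sum_{i=1}^{m}\big(x_i - \mu_\mathcal{B}\big)^2$$

$$\hat{x}_i = \frac{x_i - \mu_\mathcal{B}}{\sqrt{\sigma_\mathcal{B}^2 + \epsilon}}, \qquad y_i = \gamma \hat{x}_i + \beta$$

with $\gamma$ and $\beta$ learned per activation, so the network retains its representational power.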