4.38 score from hupso.pl for:
petewarden.com



HTML Content


Title pete warden's blog

Length: 24, Words: 4
Description ever tried. ever failed. no matter. try again. fail again. fail better.

Length: 71, Words: 12
Keywords empty
Robots
Charset UTF-8
Og Meta - Title present
Og Meta - Description present
Og Meta - Site name present
The title should be between 10 and 70 characters long (including spaces) and fewer than 12 words.
The meta description should be between 50 and 160 characters long (including spaces) and fewer than 24 words.
The character encoding should be declared; UTF-8 is usually the best character set to go with, since it is the most international encoding.
Open Graph objects should be present on the web page (more information about the Open Graph protocol: http://ogp.me/)

SEO Content

Words/Characters 7954
Text/HTML 43.78 %
Headings H1 14
H2 10
H3 1
H4 0
H5 0
H6 0
H1
pete warden's blog
tensorflow for mobile poets
what are gpus, anyway?
bossy girls, parser mcparseface, and why deep learning is not just another fad
how to quantize neural networks with tensorflow
how to break into machine learning
nano-computers are coming!
hiking montara mountain
post navigation
follow @petewarden on twitter
recent posts
recent comments
archives
footer menu
H2
ever tried. ever failed. no matter. try again. fail again. fail better.
why does quantization work?
why quantize?
why not train in lower precision directly?
how can you quantize your models?
how does the quantization process work?
what representation is used for quantized tensors?
how do we determine ranges?
how is the rounding done?
what’s next?
H3
pete warden's blog
H4
H5
H6
strong
imagenet_comp_graph_label_strings.txt
tensorflow_inception_graph.pb
become a designated machine learner
enter competitions
find a community
write documentation
don’ts
b
i
em imagenet_comp_graph_label_strings.txt
tensorflow_inception_graph.pb
become a designated machine learner
enter competitions
find a community
write documentation
don’ts
Bolds strong 7
b 0
i 0
em 7
The page content should contain more than 250 words, with a text/code ratio higher than 20%.
Use heading tags (h1, h2, h3, ...) to identify the topic of sections or paragraphs on the page, but as a rule use fewer than 6 of each heading tag to keep your page concise.
Use strong and italic tags to emphasize your page's keywords, but don't overuse them (fewer than 16 strong tags and 16 italic tags).

Page statistics

twitter:title empty
twitter:description empty
google+ itemprop=name empty
External files 31
CSS files 9
JavaScript files 22
Reduce the total number of referenced files (CSS + JavaScript) to at most 7-8.

Internal and external links

Links 223
Internal links 4
External links 219
Links without a Title attribute 198
Links with the NOFOLLOW attribute 0
Use the title attribute for every link. A nofollow link tells search engine bots not to follow it; pay attention to how you use them.

Internal links

External links

pete warden's blog https://petewarden.com/
home https://petewarden.com/
about https://petewarden.com/about/
https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/
tensorflow for mobile poets https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/
september 27, 2016 https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/
pete warden https://petewarden.com/author/petewarden/
uncategorized https://petewarden.com/category/uncategorized/
13 comments https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/#comments
tensorflow for poets https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/index.html#0
tensorflow for poets https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/index.html
https://www.youtube.com/watch?v=_bkzppniydo https://www.youtube.com/watch?v=_bkzppniydo
tensorflow/contrib/makefile/tf_op_files.txt https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/makefile/tf_op_files.txt
brew http://brew.sh/index.html
https://github.com/tensorflow/tensorflow https://github.com/tensorflow/tensorflow
https://petewarden.com/2016/05/17/what-are-gpus-anyway/
what are gpus, anyway? https://petewarden.com/2016/05/17/what-are-gpus-anyway/
may 17, 2016 https://petewarden.com/2016/05/17/what-are-gpus-anyway/
pete warden https://petewarden.com/author/petewarden/
uncategorized https://petewarden.com/category/uncategorized/
4 comments https://petewarden.com/2016/05/17/what-are-gpus-anyway/#comments
photo by mark, vicki, ellaura, and mason https://www.flickr.com/photos/brown_family_album/4607229186/in/photolist-828fhy-5b1wvj-8ytney-828fxo-b99uck-56yzsr-4fvvgx-p8cms-bgrwkm-4jtpmc-9aios-9qszhw-8257fi-9aikq-8ytmbb-9aior-8tn9aq-djv982-6evr7n-9aikl-9aiof-2bgnen-lyxs4-6v5p4g-4fvv7m-6uzyxm-4pj8mq-668zgu-4pj68y-4pe1zv-4pe762-4pjbr5-4pj6vy-4pe4ei-4pe2wi-9jvm6b-6uzyy2-4pe1kx-4pe3kh-4pj8kl-4ctqb1-4jyu2c-9aimv-9ainv-9smduw-7e1wsf-5c3rfn-3icv1d-8pb4q-4pj7dq
https://petewarden.com/2016/05/15/bossy-girls-parser-mcparseface-and-why-deep-learning-is-not-just-another-fad/
bossy girls, parser mcparseface, and why deep learning is not just another fad https://petewarden.com/2016/05/15/bossy-girls-parser-mcparseface-and-why-deep-learning-is-not-just-another-fad/
may 15, 2016 https://petewarden.com/2016/05/15/bossy-girls-parser-mcparseface-and-why-deep-learning-is-not-just-another-fad/
pete warden https://petewarden.com/author/petewarden/
uncategorized https://petewarden.com/category/uncategorized/
1 comment https://petewarden.com/2016/05/15/bossy-girls-parser-mcparseface-and-why-deep-learning-is-not-just-another-fad/#comments
fuzzy logic https://en.wikipedia.org/wiki/fuzzy_logic
corba https://en.wikipedia.org/wiki/common_object_request_broker_architecture
semantic web https://en.wikipedia.org/wiki/semantic_web
tensorflow https://tensorflow.org/
tried to build approachable tutorials https://petewarden.com/2016/02/28/tensorflow-for-poets/
release parsey mcparseface http://googleresearch.blogspot.com/2016/05/announcing-syntaxnet-worlds-most.html
a great article on why bossy is so gendered https://linguisticpulse.com/2014/03/28/no-really-bossy-is-gendered/
download parser mcparseface https://github.com/tensorflow/models/tree/master/syntaxnet
https://petewarden.com/2016/05/03/how-to-quantize-neural-networks-with-tensorflow/
how to quantize neural networks with tensorflow https://petewarden.com/2016/05/03/how-to-quantize-neural-networks-with-tensorflow/
may 3, 2016 https://petewarden.com/2016/05/03/how-to-quantize-neural-networks-with-tensorflow/
pete warden https://petewarden.com/author/petewarden/
uncategorized https://petewarden.com/category/uncategorized/
31 comments https://petewarden.com/2016/05/03/how-to-quantize-neural-networks-with-tensorflow/#comments
picture by jaebum joo https://www.flickr.com/photos/joojaebum/6843753972/in/photolist-bql2c9-dikrxp-9xlk8f-q5ew5m-abiseb-83fisy-qjtxkh-azbzo6-nh9iw6-daan4j-n8djjr-7oeswf-a8qjke-nn6vvc-flvxn8-cpebho-7oeskr-nvp3rd-tjlqs-7wofe-9j5gca-5kwjsg-8g5tpn-axvzta-9pjx85-qoiuty-5mutc2-9vdsf3-cqw4j9-3sxuph-81smc7-mwhpnt-svzdh-oyyrew-4brwyz-cypfus-q4tbsd-3sxunv-jkpvx2-4hykhk-b7zypg-9fa37c-n7vvmb-bjtykn-qydsou-81zglm-bdexsc-e1sqa8-csvhr-dnkxhf
tensorflow’s https://www.tensorflow.org/
embedded vision summit http://www.embedded-vision.com/summit
i’ve talked so much about why eight-bit is important here https://petewarden.com/2015/05/23/why-are-eight-bits-enough-for-deep-neural-networks/
song han’s code books http://arxiv.org/pdf/1510.00149.pdf
you can see the final formula in the code https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/quantization/kernels/quantization_utils.h#l32
gemmlowp https://github.com/google/gemmlowp
the kernels that implement quantized ops https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/quantization/kernels
https://petewarden.com/2016/04/18/how-to-break-into-machine-learning/
how to break into machine learning https://petewarden.com/2016/04/18/how-to-break-into-machine-learning/
april 18, 2016 https://petewarden.com/2016/04/18/how-to-break-into-machine-learning/
pete warden https://petewarden.com/author/petewarden/
uncategorized https://petewarden.com/category/uncategorized/
8 comments https://petewarden.com/2016/04/18/how-to-break-into-machine-learning/#comments
photo by erich ferdinand https://www.flickr.com/photos/erix/84884194/in/photolist-8v47a-ht298-dlxdsh-ebpor-bjtmmn-rexbg6-mphims-5kz4hc-qnmvag-9qtwgu-r3swc7-mc5b8b-g6gb9k-krnge-b6r4pv-ojvy7m-4x3ua4-5kyrae-dlxe5q-qn4zsp-4n7upk-qpmtux-7dwfyq-9mshjb-7n3bhz-6w2pbu-5wq1ug-jge7he-g51x4e-q9yogd-jsdscq-7ksawf-rgph3y-28gg7v-d7uzcy-q53dwm-6iyysc-p7vbdx-48h8sm-4tk4sg-xmnqw-ozejbx-hm7m8f-zzjzj-btxgtx-h4sjdg-jzhbj-2ergdg-qs4wnv-9dcmkm
kaggle https://www.kaggle.com/
tensorflow for poets https://petewarden.com/2016/02/28/tensorflow-for-poets/
udacity deep learning https://www.udacity.com/course/deep-learning--ud730
https://petewarden.com/2016/04/17/nano-computers-are-coming/
nano-computers are coming! https://petewarden.com/2016/04/17/nano-computers-are-coming/
april 17, 2016 https://petewarden.com/2016/04/17/nano-computers-are-coming/
pete warden https://petewarden.com/author/petewarden/
uncategorized https://petewarden.com/category/uncategorized/
leave a comment https://petewarden.com/2016/04/17/nano-computers-are-coming/#respond
- https://www.flickr.com/photos/jurvetson/65904598/in/photolist-6pm9b-ckfasf-jez2tr-8evhgw-7ioabz-cwzy3s-7xvf3n-c1akpa-c1akc1-7xveyv-6t74bo-6t2vbt-5gxyyd-2vy7kh-4hate-6t73sy-5hasvx-6t2uef-5gthgz-5hasdr-5hf9wy-6t75j1-dbm7gx-5gy1cw-8bsccb-ccklvj-ccl7iu-jd73ft-cwzybq-2t6gq1-jbq5xo-5demp9-a4mwvu-ckfbbs-92f5rw-4o5gec-5hasah-5hasnv-944lfa-4qj1d-5fxtrx-6t74hu-dbswu1-5gxzyl-6t72z3-aemtjt-6t2wd2-5gxy7y-5gtfzv-5gy1sd
photo by steve jurvetson https://www.flickr.com/photos/jurvetson/65904598/in/photolist-6pm9b-ckfasf-jez2tr-8evhgw-7ioabz-cwzy3s-7xvf3n-c1akpa-c1akc1-7xveyv-6t74bo-6t2vbt-5gxyyd-2vy7kh-4hate-6t73sy-5hasvx-6t2uef-5gthgz-5hasdr-5hf9wy-6t75j1-dbm7gx-5gy1cw-8bsccb-ccklvj-ccl7iu-jd73ft-cwzybq-2t6gq1-jbq5xo-5demp9-a4mwvu-ckfbbs-92f5rw-4o5gec-5hasah-5hasnv-944lfa-4qj1d-5fxtrx-6t74hu-dbswu1-5gxzyl-6t72z3-aemtjt-6t2wd2-5gxy7y-5gtfzv-5gy1sd
the starshot project http://spacenews.com/pete-worden-leading-100-million-interstellar-spacecraft-tech-effort/
pete worden https://en.wikipedia.org/wiki/pete_worden
semantic sensor https://petewarden.com/2015/10/03/semantic-sensors/
embedded vision summit http://www.embedded-vision.com/summit
tensorflow https://www.tensorflow.org/
summit http://www.embedded-vision.com/summit
https://petewarden.com/2016/03/21/hiking-montara-mountain/
hiking montara mountain https://petewarden.com/2016/03/21/hiking-montara-mountain/
march 21, 2016 https://petewarden.com/2016/03/21/hiking-montara-mountain/
pete warden https://petewarden.com/author/petewarden/
uncategorized https://petewarden.com/category/uncategorized/
leave a comment https://petewarden.com/2016/03/21/hiking-montara-mountain/#respond
firewatch http://www.firewatchgame.com/
bay area hiker http://bahiker.com/
montara mountain trail http://bahiker.com/southbayhikes/montaramtn.html
closed to the public http://www.sfexaminer.com/supervisors-call-for-greater-recreational-access-to-watershed/
trail guide from ba hiker was excellent http://bahiker.com/southbayhikes/montaramtn.html
« older posts https://petewarden.com/page/2/
rss - posts https://petewarden.com/feed/
tensorflow for mobile poets https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/
what are gpus, anyway? https://petewarden.com/2016/05/17/what-are-gpus-anyway/
bossy girls, parser mcparseface, and why deep learning is not just another fad https://petewarden.com/2016/05/15/bossy-girls-parser-mcparseface-and-why-deep-learning-is-not-just-another-fad/
how to quantize neural networks with tensorflow https://petewarden.com/2016/05/03/how-to-quantize-neural-networks-with-tensorflow/
how to break into machine learning https://petewarden.com/2016/04/18/how-to-break-into-machine-learning/
tensorflow for mobile poe… https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/comment-page-1/#comment-100941
- http://gravatar.com/fhunters
lauris http://gravatar.com/fhunters
tensorflow for poets https://petewarden.com/2016/02/28/tensorflow-for-poets/comment-page-1/#comment-100791
https://www.contradodigital.com/2016/11/09/celebrating-tensorflows-first-year/
celebrating tensorfl… https://www.contradodigital.com/2016/11/09/celebrating-tensorflows-first-year/
tensorflow for mobile poe… https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/comment-page-1/#comment-100757
- http://--
guillaume http://--
tensorflow for mobile poe… https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/comment-page-1/#comment-100750
tensorflow for mobile poe… https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/comment-page-1/#comment-100739
september 2016 https://petewarden.com/2016/09/
may 2016 https://petewarden.com/2016/05/
april 2016 https://petewarden.com/2016/04/
march 2016 https://petewarden.com/2016/03/
february 2016 https://petewarden.com/2016/02/
november 2015 https://petewarden.com/2015/11/
october 2015 https://petewarden.com/2015/10/
september 2015 https://petewarden.com/2015/09/
august 2015 https://petewarden.com/2015/08/
may 2015 https://petewarden.com/2015/05/
april 2015 https://petewarden.com/2015/04/
march 2015 https://petewarden.com/2015/03/
january 2015 https://petewarden.com/2015/01/
december 2014 https://petewarden.com/2014/12/
november 2014 https://petewarden.com/2014/11/
october 2014 https://petewarden.com/2014/10/
september 2014 https://petewarden.com/2014/09/
august 2014 https://petewarden.com/2014/08/
july 2014 https://petewarden.com/2014/07/
june 2014 https://petewarden.com/2014/06/
may 2014 https://petewarden.com/2014/05/
april 2014 https://petewarden.com/2014/04/
march 2014 https://petewarden.com/2014/03/
february 2014 https://petewarden.com/2014/02/
january 2014 https://petewarden.com/2014/01/
december 2013 https://petewarden.com/2013/12/
november 2013 https://petewarden.com/2013/11/
october 2013 https://petewarden.com/2013/10/
september 2013 https://petewarden.com/2013/09/
august 2013 https://petewarden.com/2013/08/
july 2013 https://petewarden.com/2013/07/
june 2013 https://petewarden.com/2013/06/
may 2013 https://petewarden.com/2013/05/
april 2013 https://petewarden.com/2013/04/
march 2013 https://petewarden.com/2013/03/
february 2013 https://petewarden.com/2013/02/
january 2013 https://petewarden.com/2013/01/
november 2012 https://petewarden.com/2012/11/
october 2012 https://petewarden.com/2012/10/
august 2012 https://petewarden.com/2012/08/
july 2012 https://petewarden.com/2012/07/
june 2012 https://petewarden.com/2012/06/
may 2012 https://petewarden.com/2012/05/
april 2012 https://petewarden.com/2012/04/
march 2012 https://petewarden.com/2012/03/
february 2012 https://petewarden.com/2012/02/
january 2012 https://petewarden.com/2012/01/
december 2011 https://petewarden.com/2011/12/
november 2011 https://petewarden.com/2011/11/
october 2011 https://petewarden.com/2011/10/
september 2011 https://petewarden.com/2011/09/
august 2011 https://petewarden.com/2011/08/
july 2011 https://petewarden.com/2011/07/
june 2011 https://petewarden.com/2011/06/
may 2011 https://petewarden.com/2011/05/
april 2011 https://petewarden.com/2011/04/
march 2011 https://petewarden.com/2011/03/
february 2011 https://petewarden.com/2011/02/
january 2011 https://petewarden.com/2011/01/
december 2010 https://petewarden.com/2010/12/
november 2010 https://petewarden.com/2010/11/
october 2010 https://petewarden.com/2010/10/
september 2010 https://petewarden.com/2010/09/
august 2010 https://petewarden.com/2010/08/
july 2010 https://petewarden.com/2010/07/
june 2010 https://petewarden.com/2010/06/
may 2010 https://petewarden.com/2010/05/
april 2010 https://petewarden.com/2010/04/
march 2010 https://petewarden.com/2010/03/
february 2010 https://petewarden.com/2010/02/
january 2010 https://petewarden.com/2010/01/
december 2009 https://petewarden.com/2009/12/
november 2009 https://petewarden.com/2009/11/
october 2009 https://petewarden.com/2009/10/
september 2009 https://petewarden.com/2009/09/
august 2009 https://petewarden.com/2009/08/
july 2009 https://petewarden.com/2009/07/
june 2009 https://petewarden.com/2009/06/
may 2009 https://petewarden.com/2009/05/
april 2009 https://petewarden.com/2009/04/
march 2009 https://petewarden.com/2009/03/
february 2009 https://petewarden.com/2009/02/
january 2009 https://petewarden.com/2009/01/
december 2008 https://petewarden.com/2008/12/
november 2008 https://petewarden.com/2008/11/
october 2008 https://petewarden.com/2008/10/
september 2008 https://petewarden.com/2008/09/
august 2008 https://petewarden.com/2008/08/
july 2008 https://petewarden.com/2008/07/
june 2008 https://petewarden.com/2008/06/
may 2008 https://petewarden.com/2008/05/
april 2008 https://petewarden.com/2008/04/
march 2008 https://petewarden.com/2008/03/
february 2008 https://petewarden.com/2008/02/
january 2008 https://petewarden.com/2008/01/
december 2007 https://petewarden.com/2007/12/
november 2007 https://petewarden.com/2007/11/
october 2007 https://petewarden.com/2007/10/
september 2007 https://petewarden.com/2007/09/
august 2007 https://petewarden.com/2007/08/
july 2007 https://petewarden.com/2007/07/
june 2007 https://petewarden.com/2007/06/
may 2007 https://petewarden.com/2007/05/
april 2007 https://petewarden.com/2007/04/
march 2007 https://petewarden.com/2007/03/
december 2006 https://petewarden.com/2006/12/
november 2006 https://petewarden.com/2006/11/
october 2006 https://petewarden.com/2006/10/
september 2006 https://petewarden.com/2006/09/
august 2006 https://petewarden.com/2006/08/
pete warden's blog https://petewarden.com/
home https://petewarden.com/
about https://petewarden.com/about/
blog at wordpress.com. https://wordpress.com/?ref=footer_blog
pete warden's blog https://petewarden.com/
blog at wordpress.com. https://wordpress.com/?ref=footer_blog

Images

Images 22
Images without an ALT attribute 6
Images without a TITLE attribute 22
Use the ALT and TITLE attributes for every image.

Images without a TITLE attribute

https://petewarden.files.wordpress.com/2016/09/screen-shot-2016-09-27-at-8-56-06-am.png?w=550
https://petewarden.files.wordpress.com/2016/09/screen-shot-2016-09-26-at-12-39-14-pm.png?w=550
https://petewarden.files.wordpress.com/2016/05/screen-shot-2016-05-16-at-5-54-04-pm.png?w=550
https://petewarden.files.wordpress.com/2016/05/asawb.png?w=550
https://petewarden.files.wordpress.com/2016/05/screen-shot-2016-05-02-at-9-59-55-pm.png?w=550
https://petewarden.files.wordpress.com/2016/05/quantization0.png?w=550
https://petewarden.files.wordpress.com/2016/05/quantization1.png?w=550
https://petewarden.files.wordpress.com/2016/05/quantization2.png?w=550
https://petewarden.files.wordpress.com/2016/04/broken_glass.png?w=550
https://petewarden.files.wordpress.com/2016/04/ccd.png?w=550
https://petewarden.files.wordpress.com/2016/03/img_2940.jpg?w=550
https://petewarden.files.wordpress.com/2016/03/img_2941.jpg?w=550
https://petewarden.files.wordpress.com/2016/03/img_2947.jpg?w=550
https://petewarden.files.wordpress.com/2016/03/img_2943.jpg?w=550
https://petewarden.files.wordpress.com/2016/03/img_2942.jpg?w=550
https://petewarden.files.wordpress.com/2016/03/img_2945.jpg?w=550
https://1.gravatar.com/avatar/d145c0e90eb7751c2cce992bd055e95c?s=48&d=identicon&r=g
https://2.gravatar.com/avatar/ee8bf52d4cec1620ed6ef0f0ee16421a?s=48&d=identicon&r=g
https://1.gravatar.com/avatar/1d14b708a9332fec0a069975f13852a3?s=48&d=identicon&r=g
https://0.gravatar.com/avatar/f40a9f0a6f0f7698a615c232f1bc2278?s=48&d=identicon&r=g
https://sb.scorecardresearch.com/p?c1=2&c2=7518284&c3=&c4=&c5=&c6=&c15=&cv=2.0&cj=1
https://pixel.wp.com/b.gif?v=noscript

Images without an ALT attribute

https://1.gravatar.com/avatar/d145c0e90eb7751c2cce992bd055e95c?s=48&d=identicon&r=g
https://2.gravatar.com/avatar/ee8bf52d4cec1620ed6ef0f0ee16421a?s=48&d=identicon&r=g
https://1.gravatar.com/avatar/1d14b708a9332fec0a069975f13852a3?s=48&d=identicon&r=g
https://0.gravatar.com/avatar/f40a9f0a6f0f7698a615c232f1bc2278?s=48&d=identicon&r=g
https://sb.scorecardresearch.com/p?c1=2&c2=7518284&c3=&c4=&c5=&c6=&c15=&cv=2.0&cj=1
https://pixel.wp.com/b.gif?v=noscript

Ranking:


Alexa Traffic
Daily Global Rank Trend
Daily Reach (Percent)
(chart not captured in text export)

Majestic SEO
(chart not captured in text export)

Text on page:

Search Pete Warden's Blog — ever tried. ever failed. no matter. try again. fail again. fail better. Menu: skip to content, Home, About

TensorFlow for Mobile Poets
September 27, 2016 by Pete Warden in Uncategorized, 13 comments

In TensorFlow for Poets, I showed how you could train a neural network to recognize objects using your own custom images. The next step is getting that model into users' hands, so in this tutorial I'll show you what you need to do to run it in your own iOS application. I'm assuming you've already completed TensorFlow for Poets, and so you should have Docker installed and a tf_files folder in your home directory that contains a retrained_graph.pb file containing your model. If you don't, you'll need to work through that example to build your own network.

You'll find the screencast to accompany this tutorial above, or at https://www.youtube.com/watch?v=_bkzppniydo, which should help clarify the steps I'll be walking you through. As a first step, open the Docker Quickstart Terminal and start a new Docker container using the latest Docker image. This tutorial relies on some newer features of TensorFlow, so the v0.8 image used for the original TF for Poets won't work:

docker run -it -p 8888:8888 -v $HOME/tf_files:/tf_files \
tensorflow/tensorflow:nightly-devel

You should find yourself in a new shell where the prompt begins with root@ and ends with a '#', indicating you're running inside the Docker image. To make sure things are set up correctly, run `ls -lah /tf_files/` and check that the retrained_graph.pb file appears.

Next, we're going to make sure that the model is producing sane results at the start. Here I'm using the default flower images to test, but if you have trained on custom categories, substitute the image file with one of your own. The compilation process may take a few minutes too, so make sure that you have updated the VirtualBox settings to take advantage of your machine's memory and processors if things are running too slowly.
cd /tensorflow/
bazel build tensorflow/examples/label_image:label_image
bazel-bin/tensorflow/examples/label_image/label_image \
--output_layer=final_result \
--labels=/tf_files/retrained_labels.txt \
--image=/tf_files/flower_photos/daisy/5547758_eea9edfd54_n.jpg \
--graph=/tf_files/retrained_graph.pb

This should hopefully produce a sensible top label for your example; in the case of flowers, with daisy at the top. We'll be using this command to make sure we're still getting sensible results as we do further processing on the model file to prepare it for use in a mobile app.

Mobile devices have limited amounts of memory, and apps need to be downloaded, so by default the iOS version of TensorFlow only includes support for operations that are common in inference and don't have large external dependencies. You can see the list of supported ops in the tensorflow/contrib/makefile/tf_op_files.txt file. One of the operations that isn't supported is DecodeJpeg, because the current implementation relies on libjpeg, which is painful to support on iOS and would increase the binary footprint. While we could write a new implementation that uses iOS's native image libraries, for most mobile applications we don't need to decode JPEGs, because we're dealing directly with camera image buffers.

Unfortunately the Inception model we based our retraining on includes a DecodeJpeg operation. We normally bypass this by directly feeding the Mul node that occurs after the decode, but on platforms that don't support the operation you'll see an error when the graph is loaded, even if the op is never called. To avoid this, the optimize_for_inference script removes all nodes that aren't needed for a given set of input and output nodes. The script also does a few other optimizations that help speed, such as merging explicit batch normalization ops into the convolutional weights to reduce the number of calculations.
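The node-stripping idea behind optimize_for_inference can be sketched as a simple reachability pass over the graph. This is a hypothetical standalone illustration (the `deps` dictionary format and the function name are made up, not TensorFlow's API): starting from the requested output nodes, keep only what they depend on, so ops like DecodeJpeg that sit outside the input-to-output path fall away.

```python
def prune_graph(deps, outputs):
    """Return the set of nodes reachable from `outputs`.

    `deps` maps each node name to the list of node names it consumes.
    Anything unreachable from the outputs (e.g. a DecodeJpeg feeding
    a path we never use at inference time) is simply dropped.
    """
    keep, stack = set(), list(outputs)
    while stack:
        node = stack.pop()
        if node not in keep:
            keep.add(node)
            stack.extend(deps.get(node, []))
    return keep
```

For a toy graph where final_result depends on Mul and Mul on an input placeholder, pruning to final_result keeps just those three nodes and discards any stray decode ops.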
Here's how you run it:

bazel build tensorflow/python/tools:optimize_for_inference
bazel-bin/tensorflow/python/tools/optimize_for_inference \
--input=/tf_files/retrained_graph.pb \
--output=/tf_files/optimized_graph.pb \
--input_names=Mul \
--output_names=final_result

This creates a new file at /tf_files/optimized_graph.pb. To check that it hasn't altered the output of the network, run label_image again on the updated model:

bazel-bin/tensorflow/examples/label_image/label_image \
--output_layer=final_result \
--labels=/tf_files/retrained_labels.txt \
--image=/tf_files/flower_photos/daisy/5547758_eea9edfd54_n.jpg \
--graph=/tf_files/optimized_graph.pb

You should see very similar results to the first time you ran label_image, since the underlying mathematical results should be preserved through the changes made to streamline it.

The retrained model is still 87MB in size at this point, and that guarantees a large download size for any app that includes it. There are lots of ways to reduce download sizes, and I'll cover those in more detail in other documentation, but there's one very simple approach that's a big help without adding much complexity. Because Apple distributes apps in .ipa packages, all of the assets are compressed using zip. Usually models don't compress well, because the weights are all slightly different floating-point values. You can achieve much better compression just by rounding all the weights within a particular constant to 256 levels, while still leaving them in floating-point format. This gives a lot more repetition for the compression algorithm to take advantage of, but doesn't require any new operators and only reduces the precision by a small amount (typically less than a 1% drop in precision).
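The weight-rounding trick is easy to sketch in isolation. The helper below is a toy illustration (the function name is made up, and this is not the quantize_graph implementation): it snaps every weight to one of 256 evenly spaced float values between the tensor's min and max, so the byte patterns repeat and zip can do its job.

```python
import numpy as np

def round_weights(weights, levels=256):
    """Snap float weights to `levels` evenly spaced values, keeping
    them in floating-point format so no new operators are needed."""
    w_min, w_max = float(weights.min()), float(weights.max())
    if w_max == w_min:
        return weights  # constant tensor: nothing to round
    scale = (w_max - w_min) / (levels - 1)
    # Quantize each weight to an integer bucket, then map it back
    # to the float value at the center of that bucket.
    return np.round((weights - w_min) / scale) * scale + w_min
```

The maximum error introduced is half a bucket width, which for typical weight ranges is the small precision cost mentioned above.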
Here's how you call the quantize_graph script to apply these changes:

bazel build tensorflow/tools/quantization:quantize_graph
bazel-bin/tensorflow/tools/quantization/quantize_graph \
--input=/tf_files/optimized_graph.pb \
--output=/tf_files/rounded_graph.pb \
--output_node_names=final_result \
--mode=weights_rounded

If you look on disk, the raw size of the rounded_graph.pb file is the same at 87MB, but if you right-click on it in the Finder and choose "Compress", you should see it results in a file that's only about 24MB or so. That reflects the size increase you'd actually see in a compressed .ipa on iOS, or an .apk on Android. To verify that the model is still working, run label_image again:

bazel-bin/tensorflow/examples/label_image/label_image \
--output_layer=final_result \
--labels=/tf_files/retrained_labels.txt \
--image=/tf_files/flower_photos/daisy/5547758_eea9edfd54_n.jpg \
--graph=/tf_files/rounded_graph.pb

This time, I would expect that the results may have slightly more noticeable changes thanks to the effects of the quantization, but the overall size and order of the labels should still be the same.

The final processing step we need to run is memory mapping. Because the buffers holding the model weight values are 87MB in size, the memory needed to load these into the app can put a lot of pressure on RAM in iOS even before the model is run. This can lead to stability problems, as the OS can unpredictably kill apps that use too much memory. Fortunately these buffers are read-only, so it's possible to map them into memory in a way that the OS can easily discard them behind the scenes when there's memory pressure, avoiding the possibility of those crashes. To support this, we need to rearrange the model so that the weights are held in sections that can be easily loaded separately from the main GraphDef, though they're all still in one file.
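Why memory mapping helps can be shown with a tiny sketch, using Python's mmap as a loose stand-in for what the loading code does with the converted file (the function name here is hypothetical; the real work happens in convert_graphdef_memmapped_format and the example app's loader). A read-only mapping gives the OS clean pages it can evict and re-read from disk under pressure, instead of killing the process.

```python
import mmap

def map_weights(path):
    """Map a weights file read-only instead of copying it into RAM.

    Clean read-only pages can be discarded by the OS at any time and
    paged back in from disk on demand, so mapped weights don't weigh
    on the app the way a big heap allocation would.
    """
    with open(path, "rb") as f:
        # mmap duplicates the descriptor, so the file object can close.
        return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
```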
Here is the command to do that:

bazel build tensorflow/contrib/util:convert_graphdef_memmapped_format
bazel-bin/tensorflow/contrib/util/convert_graphdef_memmapped_format \
--in_graph=/tf_files/rounded_graph.pb \
--out_graph=/tf_files/mmapped_graph.pb

One thing to watch out for is that the file on disk is no longer a plain GraphDef protobuf, so if you try loading it into a program like label_image that expects one, you'll see errors. You need to load the model file slightly differently, which we'll show in the iOS example below.

So far we've been running all these scripts in Docker, since for demonstration purposes it's a lot easier to run scripts there, because installing the Python dependencies is a lot more straightforward on Ubuntu than OS X. Now we're going to switch to a native terminal so that we can compile an iOS app that uses the model you've trained. You'll need Xcode 7.3 or later with the command line tools installed to build the app, which you can download from Apple. You'll also need brew, and automake to run the build script. To install it using brew, run this command:

brew install automake

Once you have those, open up a new terminal window, download the TensorFlow source (using `git clone https://github.com/tensorflow/tensorflow`) to a folder on your machine (replacing `~/projects/tensorflow` below with that location) and run the following commands to build the framework and copy your model files over:

cd ~/projects/tensorflow
tensorflow/contrib/makefile/build_all_ios.sh
cp ~/tf_files/mmapped_graph.pb \
tensorflow/contrib/ios_examples/camera/data/
cp ~/tf_files/retrained_labels.txt \
tensorflow/contrib/ios_examples/camera/data/
open tensorflow/contrib/ios_examples/camera/camera_example.xcodeproj

Check the terminal to make sure that your compilation succeeded without errors, and then you should find the camera example project opened in Xcode.
This app shows a live feed of your camera, together with the labels for any objects it has recognized, so it's a good demo project for testing out a new model. The terminal commands above should have copied the model files you need into the app's data folder, but you still need to let Xcode know that it should include them in the app. To remove the default model files, go to the left-hand project navigator pane in Xcode, select imagenet_comp_graph_label_strings.txt and tensorflow_inception_graph.pb in the data folder, and delete them, choosing "Move to Trash" when prompted. Next, open a Finder window containing the new model files, for example from the terminal like this:

open tensorflow/contrib/ios_examples/camera/data

Drag `mmapped_graph.pb` and `retrained_labels.txt` from that Finder window into the data folder in the project navigator. Make sure "Add to Targets" is enabled for CameraExample in the dialog's checkbox. This should let Xcode know that it should include the files when you build the app, so if you see later errors about missing files, double-check these steps.

We've got the files in the app, but we also need to update some other information. We need to update the name of the files to load, but also some other metadata about the size of the input images, the node names, and how to scale the pixel values numerically before feeding them in. To make those changes, open CameraExampleViewController.mm in Xcode and look for the model settings near the top of the file. Replace them with the following block:

// If you have your own model, modify this to the file name, and make sure
// you've added the file to your app resources too.
static NSString* model_file_name = @"mmapped_graph";
static NSString* model_file_type = @"pb";
// This controls whether we'll be loading a plain GraphDef proto, or a
// file created by the convert_graphdef_memmapped_format utility that wraps a
// GraphDef and parameter file that can be mapped into memory from file to
// reduce overall memory usage.
const bool model_uses_memory_mapping = true;
// If you have your own model, point this to the labels file.
static NSString* labels_file_name = @"retrained_labels";
static NSString* labels_file_type = @"txt";
// These dimensions need to match those the model was trained with.
const int wanted_input_width = 299;
const int wanted_input_height = 299;
const int wanted_input_channels = 3;
const float input_mean = 128.0f;
const float input_std = 128.0f;
const std::string input_layer_name = "Mul";
const std::string output_layer_name = "final_result";

Finally, plug in and select your iOS device (this won't run on the simulator, because it needs a camera) and hit Command+R to build and run the modified example. If everything has worked, you should see the app start, display the live camera feed, and begin showing labels from your training categories. To test it out, find an example of the kind of objects you're trying to recognize, point the camera at it, and see if it is able to give it the right label. If you don't have any physical objects handy, try doing an image search on the web, and then point it at your computer display. Congratulations, you've managed to train your own model and run it on a phone!

As next steps, a lot of the same transformations can be used on Android or for the Raspberry Pi, and for all sorts of other models available in TensorFlow, for everything from natural language processing to speech synthesis. I'm excited to see new apps emerge using the incredible capabilities of deep learning on device, so I can't wait to see what you come up with!

What Are GPUs, Anyway?
May 17, 2016 by Pete Warden in Uncategorized. 4 comments. Photo by Mark, Vicki, Ellaura, and Mason.

A good friend of mine just asked me "What are GPUs?". It came up because she's a great digital artist who's getting into VR, and the general advice she gets is "buy a PC with a video card that costs more than $350". What makes that one component cost so much, why do we need them, and what do they do? To help answer that, I thought I'd try to give an overview aimed at non-engineers.

Graphics processing units were created to draw images, text, and geometry onto the screen. This means they're designed very differently from the CPUs that run applications. CPUs need to be good at following very complex recipes of instructions so they can deal with all sorts of user inputs and switch between tasks rapidly. GPUs are much more specialized: they only need to do a limited range of things, but each job they're given can involve touching millions of memory locations in one go.

To see the difference between the kinds of programs that run on CPUs and GPUs, think about a CPU reading from a text box. The CPU will sit waiting for you to press a key, and as soon as you do it might need to look in a list to figure out if there's an autocomplete entry, check the spelling, or move to the next box if you hit return. This is a complex set of instructions with a lot of decisions involved. By contrast, a typical GPU task would be drawing an image on-screen. A picture that's 1,000 pixels wide and high has a million elements, and drawing it means moving all of those into the screen buffer. That's a lot more work than just waiting for a key press, but it also involves a lot fewer decisions, since you just need to move a large number of pixels from one place to another.

The differences in the kinds of tasks that CPUs and GPUs need to do mean that they're designed in very different ways. CPUs are very flexible and able to do a lot of complicated tasks involving decision-making.
GPUs are less adaptable, but can operate on large numbers of elements at once, so they can perform many operations much faster.

The way GPUs achieve this is by breaking their tasks into much smaller components that can be shared across a large set of many small processors running at once. Because the jobs they're being asked to do are simpler than those CPUs handle, it's easy to split them up automatically like this. As an example, you can imagine having hundreds of little processors, each of which is given a tile of an image to draw. By having them work in parallel, the whole picture can be drawn much faster.

The key advantage of GPUs is this scalability. They can't do every job, but for the ones they can tackle, you essentially can just pack more processors onto the board to get faster performance. This is why video cards that are capable of handling the high resolutions and framerates you need for VR are more expensive: they have more (and individually faster) processors to handle those larger sizes as you go up in price. This scalability is harder to achieve on CPUs, because it's much trickier to break up the logic needed to run applications into smaller jobs.

This is a painfully simplified explanation, I know, but I'm hoping to get across what makes GPUs fundamentally different from CPUs. If you have a task that involves a lot of computation but few decision points, then GPUs are set up to parallelize that job automatically. This is clearest in graphics, but also comes up as part of deep learning, where there are similar heavy-lifting requirements across millions of artificial neurons. As Moore's Law continues to fade, leaving CPU speeds to languish, these sorts of parallel approaches will become more and more attractive.
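To make the tile idea concrete, here's a toy Python sketch (my own illustration, not how real GPU hardware works): an "image" is split into tiles, and a pool of workers fills each tile independently, with no coordination needed between them.

```python
# Toy illustration of tile-based parallelism: each worker draws one tile
# of the image, with no decisions or coordination needed between workers.
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILE = 8, 8, 4

def draw_tile(origin):
    """Return the pixel coordinates this worker is responsible for."""
    ox, oy = origin
    return [(ox + x, oy + y) for y in range(TILE) for x in range(TILE)]

# One tile origin per worker: (0,0), (4,0), (0,4), (4,4) for an 8x8 image.
origins = [(x, y) for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]
with ThreadPoolExecutor() as pool:
    tiles = list(pool.map(draw_tile, origins))

pixels = {p for tile in tiles for p in tile}
print(len(pixels))  # 64: every pixel of the 8x8 image drawn exactly once
```

Each tile is an independent job with no branching that depends on the others, which is exactly the shape of work that scales by adding more processors.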
Bossy girls, Parser McParseface, and why deep learning is not just another fad

May 15, 2016 by Pete Warden in Uncategorized. 1 comment.

When I talk to people outside of Google and the subject turns to neural networks, I often encounter a lot of skepticism. Anybody who's been alive over the past two decades has seen a lot of technological fads appear in an explosion of hype and fade away without making much of a lasting impact. Remember fuzzy logic, CORBA, or the Semantic Web?

Deep learning is different, and I believe this fervently because I've seen the approach deliver record-beating results in practical applications across an amazing variety of different problems. That's why TensorFlow is so important to me personally: it's a great platform to share some very down-to-earth tools that demonstrate convincingly how powerful the technique can be. That's a big reason I've tried to build approachable tutorials for common needs like image recognition, so everyone has a chance to see it working for themselves.

It's also why I was over the moon to see another Google research team release Parsey McParseface! This is a state-of-the-art sentence parser built using TensorFlow. That might sound a bit esoteric, but parsing is one of the fundamental problems that computers need to tackle to understand written language. With this available, I'm starting to dream up all sorts of interesting applications I wouldn't have been able to think about before. For instance, I'd love to know what verbs and adjectives are most commonly applied to men and women in all sorts of different contexts.

To illustrate my point, here's a paragraph from a great article on why "bossy" is so gendered:

Finally, the most flexible approach is one that is much more labor intensive. It involves gathering a random sample of instances of bossy and then simply reading through all of them with our own eyes to determine who is being labelled bossy.
This is the approach I took in my recent blog post. Because of the amount of time involved, I looked at far fewer examples than any of the approaches I've discussed, but I also was able to classify instances that the above approaches would have missed. The graph below illustrates what I found, namely that bossy was applied to women and girls three times more frequently than it was to men and boys. … You might think to yourself, "But there's only 101 examples! That's so few!"

This kind of attribution of an adjective to a subject is something an accurate parser can do automatically. Rather than laboriously going through just a hundred examples, it's easy to set up Parsey McParseface and run through millions of sentences. The parser isn't perfect, but at 94% accuracy on one metric, it's pretty close to humans, who get 96%. Even better, having the computer do the heavy lifting means that it's possible to explore many other relationships in the data, to uncover all sorts of unknown statistical relationships in the language we use. There are bound to be other words that are skewed in similar or opposite ways to "bossy", and I'd love to know what they are!

That's just one example though. The reason I'm so excited about deep learning is that I can't even imagine all the applications that have now become possible. Download Parsey McParseface yourself and give it a try on a problem you care about; I'd love to see what you come up with!

How to quantize neural networks with TensorFlow

May 3, 2016 by Pete Warden in Uncategorized. 31 comments. Picture by Jaebum Joo.

I'm pleased to say that we've been able to release a first version of TensorFlow's quantized eight-bit support. I was pushing hard to get it in before the Embedded Vision Summit, because it's especially important for low-power and mobile devices, so it's exciting to get it out there.
All this documentation will be appearing on the main TensorFlow site too, but since I've talked so much here about why eight-bit is important, I wanted to give an overview of what we've released in this post as well.

When modern neural networks were being developed, the biggest challenge was getting them to work at all! That meant that accuracy and speed during training were the top priorities. Using floating-point arithmetic was the easiest way to preserve accuracy, and GPUs were well-equipped to accelerate those calculations, so it's natural that not much attention was paid to other numerical formats.

These days, we actually have a lot of models being deployed in commercial applications. The computation demands of training grow with the number of researchers, but the cycles needed for inference expand in proportion to users. That means pure inference efficiency has become a burning issue for a lot of teams.

That is where quantization comes in. It's an umbrella term that covers a lot of different techniques to store numbers and perform calculations on them in more compact formats than 32-bit floating point. I am going to focus on eight-bit fixed point, for reasons I'll go into more detail on later.

Why does quantization work?

Training neural networks is done by applying many tiny nudges to the weights, and these small increments typically need floating-point precision to work (though there are research efforts to use quantized representations here too). Taking a pre-trained model and running inference is very different. One of the magical qualities of deep networks is that they tend to cope very well with high levels of noise in their inputs. If you think about recognizing an object in a photo you've just taken, the network has to ignore all the CCD noise, lighting changes, and other non-essential differences between it and the training examples it's seen before, and focus on the important similarities instead.
This ability means that they seem to treat low-precision calculations as just another source of noise, and still produce accurate results even with numerical formats that hold less information.

Why quantize?

Neural network models can take up a lot of space on disk, with the original AlexNet being over 200 MB in float format, for example. Almost all of that size is taken up with the weights for the neural connections, since there are often many millions of these in a single model. Because they're all slightly different floating-point numbers, simple compression formats like zip don't compress them well. They are arranged in large layers though, and within each layer the weights tend to be normally distributed within a certain range, for example -3.0 to 6.0.

The simplest motivation for quantization is to shrink file sizes by storing the min and max for each layer, and then compressing each float value to an eight-bit integer representing the closest real number in a linear set of 256 within the range. For example, with the -3.0 to 6.0 range, a 0 byte would represent -3.0, a 255 would stand for 6.0, and 128 would represent about 1.5. I'll go into the exact calculations later, since there are some subtleties, but this means you can get the benefit of a file on disk that's shrunk by 75%, and then convert back to float after loading so that your existing floating-point code can work without any changes.

Another reason to quantize is to reduce the computational resources you need to do the inference calculations, by running them entirely with eight-bit inputs and outputs. This is a lot more difficult since it requires changes everywhere you do calculations, but offers a lot of potential rewards. Fetching eight-bit values only requires 25% of the memory bandwidth of floats, so you'll make much better use of caches and avoid bottlenecking on RAM access. You can also typically use SIMD operations that do many more operations per clock cycle.
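As a rough sketch of the linear scheme described above (my own Python illustration, not TensorFlow's actual kernels), quantizing and dequantizing a value within a layer's min/max range looks like this:

```python
# Minimal sketch of linear eight-bit quantization: floats in
# [range_min, range_max] map onto the 256 values 0..255 and back.
def quantize(value, range_min, range_max):
    """Map a float in [range_min, range_max] to the nearest of 256 levels."""
    scale = (range_max - range_min) / 255.0
    return int(round((value - range_min) / scale))

def dequantize(q, range_min, range_max):
    """Recover the float that a quantized byte stands for."""
    scale = (range_max - range_min) / 255.0
    return range_min + q * scale

# With the -3.0 to 6.0 range from the text:
print(quantize(-3.0, -3.0, 6.0))             # 0
print(quantize(6.0, -3.0, 6.0))              # 255
print(round(dequantize(128, -3.0, 6.0), 3))  # 1.518, i.e. about 1.5
```

The worst-case round-trip error is half a quantization step, (6.0 - (-3.0)) / 255 / 2, or about 0.018 for this range, which is the kind of noise-like error the network shrugs off.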
in some case you’ll have a dsp chip available that can accelerate eight-bit calculations too, which can offer a lot of advantages. moving calculations over to eight bit will help you run your models faster, and use less power (which is especially important on mobile devices). it also opens the door to a lot of embedded systems that can’t run floating point code efficiently, so it can enable a lot of applications in the iot world. why not train in lower precision directly? there have been some experiments training at lower bit depths, but the results seem to indicate that you need higher than eight bit to handle the back propagation and gradients. that makes implementing the training more complicated, and so starting with inference made sense. we also already have a lot of float models already that we use and know well, so being able to convert them directly is very convenient. how can you quantize your models? tensorflow has production-grade support for eight-bit calculations built it. it also has a process for converting many models trained in floating-point over to equivalent graphs using quantized calculations for inference. for example, here’s how you can translate the latest googlenet model into a version that uses eight-bit computations: curl http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz -o /tmp/inceptionv3.tgz tar xzf /tmp/inceptionv3.tgz -c /tmp/ bazel build tensorflow/contrib/quantization/tools:quantize_graph bazel-bin/tensorflow/contrib/quantization/tools/quantize_graph \ --input=/tmp/classify_image_graph_def.pb \ --output_node_names="softmax" --output=/tmp/quantized_graph.pb \ --mode=eightbit this will produce a new model that runs the same operations as the original, but with eight bit calculations internally, and all weights quantized as well. if you look at the file size, you’ll see it’s about a quarter of the original (23mb versus 91mb). 
You can still run this model using exactly the same inputs and outputs though, and you should get equivalent results. Here's an example:

```
bazel build tensorflow/examples/label_image:label_image
bazel-bin/tensorflow/examples/label_image/label_image \
  --input_graph=/tmp/quantized_graph.pb \
  --input_width=299 \
  --input_height=299 \
  --mean_value=128 \
  --std_value=128 \
  --input_layer_name="mul:0" \
  --output_layer_name="softmax:0"
```

You'll see that this runs the newly quantized graph, and outputs a very similar answer to the original. You can run the same process on your own models saved out as GraphDefs, with the input and output names adapted to those your network requires. I recommend that you run them through the freeze_graph script first, to convert checkpoints into constants stored in the file.

How does the quantization process work?

We've implemented quantization by writing equivalent eight-bit versions of operations that are commonly used during inference. These include convolution, matrix multiplication, activation functions, pooling operations, and concatenation. The conversion script first replaces all the individual ops it knows about with quantized equivalents. These are small sub-graphs that have conversion functions before and after to move the data between float and eight-bit. Below is an example of what they look like. First, here's the original ReLU operation, with float inputs and outputs. Then, this is the equivalent converted subgraph, still with float inputs and outputs, but with internal conversions so the calculations are done in eight bit. The Min and Max operations actually look at the values in the input float tensor, and feed them into the Quantize operation that converts the tensor into eight bits. There are more details on how the quantized representation works later on.

Once the individual operations have been converted, the next stage is to remove unnecessary conversions to and from float.
If there are consecutive sequences of operations that all have quantized equivalents, then there will be a lot of adjacent dequantize/quantize ops. This stage spots that pattern, recognizes that the pairs cancel each other out, and removes them. Applied on a large scale to models where all of the operations have quantized equivalents, this gives a graph where all of the tensor calculations are done in eight bit, without having to convert to float.

What representation is used for quantized tensors?

We approach converting floating-point arrays of numbers into eight-bit representations as a compression problem. We know that the weights and activation tensors in trained neural network models tend to have values that are distributed across comparatively small ranges (for example, you might have -15 to +15 for weights, or -500 to 1000 for activations on an image model, though the exact numbers will vary). We also know from experiment that neural nets tend to be very robust in the face of noise, and so the noise-like error produced by quantizing down to a small set of values will not hurt the precision of the overall results very much. We also want to pick a representation that's easy to perform calculations on, especially the large matrix multiplications that form the bulk of the work needed to run a model.

These led us to pick a representation with two floats storing the overall minimum and maximum values that are represented by the lowest and highest quantized value. Each entry in the quantized array represents a float value in that range, distributed linearly between the minimum and maximum.
For example, if we have minimum = -10.0 and maximum = 30.0, and an eight-bit array, here's what the quantized values represent:

```
Quantized | Float
----------+------
        0 | -10.0
      255 |  30.0
      128 |  10.0
```

The advantages of this format are that it can represent arbitrary magnitudes of ranges, the ranges don't have to be symmetrical, it can represent signed and unsigned values, and the linear spread makes doing multiplications straightforward. There are alternatives like Song Han's code books that can use lower bit depths by non-linearly distributing the float values across the representation, but these tend to be more expensive to calculate on. The advantage of having a strong and clear definition of the quantized format is that it's always possible to convert back and forth from float for operations that aren't quantization-ready, or to inspect the tensors for debugging purposes. One implementation detail in TensorFlow that we're hoping to improve in the future is that the minimum and maximum float values need to be passed as separate tensors alongside the one holding the quantized values, so graphs can get a bit dense!

How do we determine ranges?

The nice thing about the minimum and maximum ranges is that they can often be pre-calculated. Weight parameters are constants known at load time, so their ranges can also be stored as constants. We often know the ranges for inputs (for example, images are usually RGB values in the range 0.0 to 255.0), and many activation functions have known ranges too. This can avoid having to analyze the outputs of an operation to determine the range, which we need to do for math ops like convolution or matrix multiplication that produce 32-bit accumulated results from 8-bit inputs.

If you're doing any kind of arithmetic on 8-bit inputs, you'll naturally start to accumulate results that have more than 8 bits of precision. If you add two 8-bit values, the result needs 9 bits. If you multiply two 8-bit numbers, you get 16 bits in the output.
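This bit growth is easy to check with plain integer arithmetic; the snippet below is my own illustration, using Python's arbitrary-precision ints to count the bits needed in the worst case:

```python
# Verify how results grow beyond 8 bits, using worst-case unsigned values.
max8 = 255  # largest unsigned 8-bit value

add_bits = (max8 + max8).bit_length()        # sum of two 8-bit values
mul_bits = (max8 * max8).bit_length()        # product of two 8-bit values
dot_bits = (256 * max8 * max8).bit_length()  # 256-term dot product of products

print(add_bits, mul_bits, dot_bits)  # 9 16 24
```

A 256-term dot product already needs a 24-bit accumulator in the worst case, which is in line with the 20-to-25-bit accumulator widths discussed below.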
If you total up a series of 8-bit multiplications, like we do for matrix multiplication, the results grow beyond 16 bits, with the accumulator typically needing at least 20 to 25 bits, depending on how long the dot products involved are.

This can be an issue for our quantization approach, since we need to take an output that's much wider than 8 bits and shrink it down to feed into the next operation. One way to do it for matrix multiplies would be to calculate the largest and smallest possible output values, assuming all of the input values were at extremes. This is safe, since we know mathematically that no results can fall outside this range, but in practice most weights and activation values are much more evenly distributed. This means that the actual range of values we see is much smaller than the theoretical one, so if we used the larger bounds we'd be wasting a lot of our 8 bits on numbers that never appeared. Instead, we use the QuantizeDownAndShrinkRange operator to take a 32-bit accumulated tensor, analyze it to understand the actual ranges used, and rescale so that the 8-bit output tensor uses that range effectively. There are strategies that involve observing the actual minimums and maximums encountered with large sets of training data, and hard-coding those to avoid analyzing the buffer for ranges every time, but we don't currently include that optimization.

How is the rounding done?

One of the hardest and most subtle problems we hit during quantization was the accumulation of biases. As I mentioned above, neural networks are very resilient to noise, but unless you're very careful with rounding it's easy to introduce biases in a single direction that build up during computation and wreck the final accuracy.
You can see the final formula in the code, but the important part was that we needed to subtract the rounded version of the minimum from the rounded version of the float input value, rather than subtracting the float minimum from the input and then rounding.

What's next?

We've found that we can get extremely good performance on mobile and embedded devices by using eight-bit arithmetic rather than floating point. You can see the framework we use to optimize matrix multiplications at gemmlowp. We still need to apply all the lessons we've learned to the TensorFlow ops to get maximum performance on mobile, but we're actively working on that. Right now, this quantized implementation is a reasonably fast and accurate reference implementation that we're hoping will enable wider support for our eight-bit models on a wider variety of devices. If you're interested, I highly recommend digging through the quantization code in TensorFlow, especially looking at the kernels that implement quantized ops. These all include reference implementations that we're hoping will help portability to new hardware devices. We also hope that this demonstration will encourage the community to explore what's possible with low-precision neural networks. Thanks to everyone who helped put the quantization support together; it's been great getting this out there!

How to break into machine learning

April 18, 2016 by Pete Warden in Uncategorized. 8 comments. Photo by Erich Ferdinand.

An engineer recently asked me how she could turn an interest in machine learning into a full-time job. This can be a daunting prospect, because the whole field has until recently been very separate from traditional engineering, with only a few specialists at large companies using it in production, often far from traditional product teams. I took a very random path to focusing on deep learning full time, but so did most of the people I work with.
It's not clear that there is one good route, but I wanted to share the advice I had to offer in case it's helpful to others.

Become a designated machine learner

Every manager should point at one member of their team and say "you are now our machine learning expert". If your manager doesn't do that for you, announce it yourself to anyone who will listen. This may sound like madness, but machine learning is rapidly invading almost every product area, so whether you're in games or enterprise software, your group needs to at least stay up to date with what's happening with the technology. If you aren't, then your competitors are! You may have to fight your own imposter syndrome, but becoming the go-to person for everyone's questions about machine learning is a fantastic way to teach yourself the essentials. You'll have to say "good question, let me go figure that out" a lot at first, but every expert I know does the same! Even if you don't end up building anything in production, at least you'll be able to point at relevant research and experiments if you decide to change to a new position.

Enter competitions

I have been a massive fan of Kaggle since it got off the ground. If your job's not offering you the opportunities in machine learning you want, then joining that community is a great way to teach yourself a lot of practical skills. If you look through the forums, a lot of the contestants will describe exactly how they solved old competitions, so I would recommend following a few of their recipes to get started. Once you're able to do that, pick a new contest that's similar to one of those, and start playing around with all of the different options to see how you can improve the results. Most of machine learning is the software equivalent of banging on the side of the TV set until it works, so don't be discouraged if you have trouble seeing an underlying theory behind all your tweaking!
Find a community

As I mentioned above, the most frustrating thing about machine learning is how arbitrary it all is. I'm lucky enough to be at a large company surrounded by people I can talk to about things like why my model isn't learning, but most engineers don't have that luxury. That's another advantage of Kaggle; from what I've seen, their forums offer a lot of support and encouragement. I would also look out for real-world meetups where you can swap stories and commiserate. If you can't find something related to your field, try starting a mailing list or group yourself, or propose a session at a conference. There is a long tradition of mentorship in machine learning, especially around deep learning, but I think we should be doing a much better job of capturing all that oral tradition. As someone who was recently an outsider myself, I want to see the field democratized. I think the reliance on word-of-mouth is more about poor written communication than anything inherent in the subject.

Write documentation

On that topic, my TensorFlow for Poets post came out of work I was doing to help myself understand how to reliably retrain the top layer of a deep network. I didn't know how before I started, but by carefully documenting the process and making sure I could reproduce it consistently, I learned a lot about how it all works. I also got a lot of helpful feedback as I shared drafts of the guide with colleagues. One interesting thing about human nature is that people are a lot more willing to correct somebody else's mistaken ideas than they are to propose their own. As long as you're happy to keep eating humble pie, writing up your own tentative understanding and getting it reviewed is a much more effective way of getting others to share their knowledge than asking flat out! That's another reason I try to do documentation: purely for the corrections.
Don'ts

Unless you're doing a degree at a recognized university, I personally don't recommend going for a credential in machine learning. I do love courses like the Udacity deep learning program, but for the content, not as a résumé builder. Having practical experience, even just on competitions like Kaggle, will be a lot more helpful in interviews. As an engineer, I also find many machine learning research papers hard to get much benefit from. They tend to assume a lot of prior knowledge from the academic world, and prefer presenting their ideas in math rather than code. They can be useful once you're experienced, but don't worry if you're left baffled by them at first.

Anyway, I hope some of these ideas are useful. Definitely read them with a skeptical eye; nobody really knows anything in this field, and I'll be interested to hear what other suggestions people have!

Nano-computers are coming!

April 17, 2016 by Pete Warden in Uncategorized. Leave a comment. Photo by Steve Jurvetson.

A few days ago I got an email from a journalist asking about the Starshot project. Of course he was looking for my much more famous namesake Pete Worden, but I've been fascinated by the effort too. Its whole foundation is that we'll soon be able to miniaturize space probes down to a few grams and have them function on tiny amounts of power.

Over the past few years I've come to realize that's the future of computing. Imagine having a self-contained system that costs a few cents, is only a couple of millimeters wide, with a self-contained battery, processor, and basic CCD image sensor. Using modern deep learning techniques, you could train it to recognize crop pests or diseases on leaves and then scatter a few thousand across a field. Or sprinkle them through a jungle to help spot endangered wildlife. They could be spread over our bridges to spot corrosion before it gets started, or used for any of the semantic sensor applications I've talked about before.
I know how useful these systems will be once they exist, but there are some major engineering challenges to solve before we get there. That's why I'm excited to be going to the Embedded Vision Summit in a couple of weeks. Jeff Bier has gathered together a fantastic group of developers and industry leaders who are working on making this future happen. We'll also have a strong presence from the TensorFlow team, to show how important embedded devices are to us. Jeff Dean will be keynoting, and I'll be discussing the nitty-gritty of using the framework on tiny devices. If you're intrigued by the idea of these "nano-computers" and want to find out more (or even better, if you're already working on them, like several folks I know!), I highly recommend joining me at the Summit in Santa Clara, May 2nd to 4th.

Hiking Montara Mountain

March 21, 2016 by Pete Warden in Uncategorized. Leave a comment.

I finished off Firewatch yesterday, and it made me nostalgic for the days when I'd hike almost every weekend. I realized that part of it was because I don't know enough of the local trails in San Francisco, so I decided to explore the wonderful Bay Area Hiker site for nearby hikes that would get me into the wilderness without taking up the whole day. I ended up choosing the Montara Mountain trail, and I'm very glad I did! It's just outside of underrated Pacifica (which I'm always surprised isn't the Malibu of San Francisco), and I was especially excited to get a closer look at the vast Peninsula Watershed area that's currently closed to the public. The trail guide from BA Hiker was excellent, despite dating from 2003. There was lots of room to park at the trailhead, possibly due to the $6 fee, and very clear signage for the trail that included distances. After the rains we've had this winter, the wildflowers were starting to blossom. It was great seeing my old friend from Los Angeles, ceanothus (or wild lilac), with a full set of blossoms too.
The wet weather made life very pleasant for a banana slug I encountered slithering across the trail as well. The trailbed was in great condition; there were obviously some good crews taking care of the swales and drainages, so it was all very hikeable despite El Niño. A bridge on the Brooks Falls trail that forms part of the return loop was washed out though, so I made it an out-and-back. It was a seven-mile trip with 1,600 feet of elevation gain, with most of the outward part a steady uphill slog with one or two steeper sections. The views from the higher sections make the effort worthwhile though. I caught a glimpse of the watershed where the trail finished, blocked by a gate and fence, but by then it was starting to rain a little, so I headed back down quickly. It was a great hike, taking a little under three hours despite how little I've hiked recently, and the trailhead's only thirty minutes from central San Francisco, so it's convenient enough that I hope I'll be able to fit it in even on busy weekends. Despite being so close to the city, once I got past the first mile it felt very wild, so I got a refreshing taste of nature as well. I'm looking forward to many more trips, and maybe a few more explorations of other nearby hikes on BA Hiker, since this one was so much fun!
because - 0.22% (17)
name - 0.21% (16)
like - 0.21% (16)
load - 0.21% (16)
results - 0.21% (16)
press - 0.21% (16)
label_image - 0.21% (16)
way - 0.21% (16)
network - 0.21% (16)
don’t - 0.21% (16)
thing - 0.2% (15)
came - 0.2% (15)
process - 0.2% (15)
machine - 0.2% (15)
labels - 0.2% (15)
examples - 0.2% (15)
quantization - 0.2% (15)
new - 0.2% (15)
represent - 0.2% (15)
eight-bit - 0.2% (15)
calculations - 0.2% (15)
set - 0.18% (14)
start - 0.18% (14)
here’s - 0.18% (14)
some - 0.18% (14)
large - 0.18% (14)
ios - 0.18% (14)
on, - 0.18% (14)
then - 0.18% (14)
format - 0.18% (14)
bazel - 0.18% (14)
few - 0.18% (14)
weight - 0.18% (14)
most - 0.17% (13)
gpu - 0.17% (13)
convert - 0.17% (13)
look - 0.17% (13)
camera - 0.17% (13)
operations - 0.17% (13)
retrain - 0.17% (13)
you’ll - 0.17% (13)
has - 0.17% (13)
models - 0.17% (13)
why - 0.17% (13)
mobile - 0.17% (13)
down - 0.17% (13)
you’re - 0.17% (13)
just - 0.17% (13)
different - 0.16% (12)
2016 - 0.16% (12)
2009 - 0.16% (12)
would - 0.16% (12)
2014 - 0.16% (12)
pete - 0.16% (12)
2013 - 0.16% (12)
who - 0.16% (12)
gpus - 0.16% (12)
april - 0.16% (12)
const - 0.16% (12)
help - 0.16% (12)
2008 - 0.16% (12)
try - 0.16% (12)
size - 0.16% (12)
i’m - 0.16% (12)
on. - 0.16% (12)
only - 0.16% (12)
weights - 0.16% (12)
deep - 0.16% (12)
2011 - 0.16% (12)
ram - 0.16% (12)
2010 - 0.16% (12)
neural - 0.16% (12)
every - 0.16% (12)
memory - 0.16% (12)
final - 0.15% (11)
even - 0.15% (11)
layer - 0.15% (11)
map - 0.15% (11)
number - 0.15% (11)
works - 0.15% (11)
old - 0.15% (11)
max - 0.15% (11)
since - 0.15% (11)
retrained - 0.15% (11)
those - 0.15% (11)
call - 0.15% (11)
floating - 0.15% (11)
read - 0.15% (11)
warden - 0.15% (11)
september - 0.15% (11)
march - 0.15% (11)
find - 0.15% (11)
term - 0.15% (11)
still - 0.15% (11)
cpu - 0.15% (11)
sure - 0.15% (11)
inference - 0.15% (11)
support - 0.15% (11)
led - 0.13% (10)
2007 - 0.13% (10)
2012 - 0.13% (10)
november - 0.13% (10)
device - 0.13% (10)
though - 0.13% (10)
been - 0.13% (10)
august - 0.13% (10)
many - 0.13% (10)
through - 0.13% (10)
version - 0.13% (10)
data - 0.13% (10)
trail - 0.13% (10)
october - 0.13% (10)
she - 0.13% (10)
small - 0.13% (10)
its - 0.13% (10)
approach - 0.13% (10)
before - 0.13% (10)
compress - 0.13% (10)
/tensorflow/ - 0.13% (10)
which - 0.13% (10)
hike - 0.12% (9)
2015 - 0.12% (9)
include - 0.12% (9)
first - 0.12% (9)
their - 0.12% (9)
lower - 0.12% (9)
implement - 0.12% (9)
want - 0.12% (9)
post - 0.12% (9)
inputs - 0.12% (9)
means - 0.12% (9)
los - 0.12% (9)
ranges - 0.12% (9)
training - 0.12% (9)
give - 0.12% (9)
each - 0.12% (9)
come - 0.12% (9)
i’ve - 0.12% (9)
take - 0.12% (9)
february - 0.11% (8)
comment - 0.11% (8)
we’re - 0.11% (8)
once - 0.11% (8)
equivalent - 0.11% (8)
rounded - 0.11% (8)
precision - 0.11% (8)
well - 0.11% (8)
tools - 0.11% (8)
optimize - 0.11% (8)
we’ve - 0.11% (8)
where - 0.11% (8)
under - 0.11% (8)
another - 0.11% (8)
graphdef - 0.11% (8)
networks - 0.11% (8)
open - 0.11% (8)
having - 0.11% (8)
move - 0.11% (8)
change - 0.11% (8)
great - 0.11% (8)
script - 0.11% (8)
minimum - 0.11% (8)
bits - 0.11% (8)
july - 0.11% (8)
cpus - 0.11% (8)
applications - 0.11% (8)
june - 0.11% (8)
involve - 0.11% (8)
december - 0.11% (8)
january - 0.11% (8)
time - 0.11% (8)
job - 0.11% (8)
high - 0.11% (8)
next - 0.11% (8)
being - 0.11% (8)
across - 0.11% (8)
devices - 0.11% (8)
i’ll - 0.11% (8)
mapped - 0.11% (8)
project - 0.09% (7)
photo - 0.09% (7)
multiplication - 0.09% (7)
maximum - 0.09% (7)
representation - 0.09% (7)
recent - 0.09% (7)
were - 0.09% (7)
feed - 0.09% (7)
names - 0.09% (7)
off - 0.09% (7)
xcode - 0.09% (7)
good - 0.09% (7)
numbers - 0.09% (7)
near - 0.09% (7)
bossy - 0.09% (7)
parser - 0.09% (7)
important - 0.09% (7)
test - 0.09% (7)
let - 0.09% (7)
search - 0.09% (7)
download - 0.09% (7)
docker - 0.09% (7)
there’s - 0.09% (7)
changes - 0.09% (7)
poets - 0.09% (7)
less - 0.09% (7)
ops - 0.09% (7)
command - 0.09% (7)
when - 0.09% (7)
san - 0.09% (7)
uncategorized - 0.09% (7)
yourself - 0.09% (7)
same - 0.09% (7)
getting - 0.09% (7)
advantage - 0.09% (7)
actual - 0.08% (6)
0.0 - 0.08% (6)
hard - 0.08% (6)
does - 0.08% (6)
going - 0.08% (6)
especially - 0.08% (6)
recognize - 0.08% (6)
better - 0.08% (6)
could - 0.08% (6)
reason - 0.08% (6)
tend - 0.08% (6)
128 - 0.08% (6)
older - 0.08% (6)
line - 0.08% (6)
needed - 0.08% (6)
flower - 0.08% (6)
doing - 0.08% (6)
problem - 0.08% (6)
outputs - 0.08% (6)
node - 0.08% (6)
/tmp/ - 0.08% (6)
face - 0.08% (6)
without - 0.08% (6)
noise - 0.08% (6)
uses - 0.08% (6)
check - 0.08% (6)
think - 0.08% (6)
list - 0.08% (6)
sort - 0.08% (6)
part - 0.08% (6)
fast - 0.08% (6)
blog - 0.08% (6)
used - 0.08% (6)
raw - 0.08% (6)
original - 0.08% (6)
top - 0.08% (6)
terminal - 0.08% (6)
produce - 0.08% (6)
full - 0.08% (6)
show - 0.08% (6)
possible - 0.08% (6)
perform - 0.08% (6)
engineer - 0.08% (6)
product - 0.08% (6)
running - 0.08% (6)
sit - 0.08% (6)
got - 0.08% (6)
back - 0.08% (6)
task - 0.08% (6)
they’re - 0.08% (6)
step - 0.08% (6)
matrix - 0.08% (6)
similar - 0.08% (6)
implementation - 0.08% (6)
side - 0.08% (6)
buffer - 0.07% (5)
stand - 0.07% (5)
offer - 0.07% (5)
starting - 0.07% (5)
kind - 0.07% (5)
it. - 0.07% (5)
can’t - 0.07% (5)
sorts - 0.07% (5)
made - 0.07% (5)
computer - 0.07% (5)
inception - 0.07% (5)
date - 0.07% (5)
object - 0.07% (5)
wild - 0.07% (5)
computation - 0.07% (5)
fad - 0.07% (5)
faster - 0.07% (5)
people - 0.07% (5)
apps - 0.07% (5)
often - 0.07% (5)
interest - 0.07% (5)
million - 0.07% (5)
wide - 0.07% (5)
field - 0.07% (5)
file. - 0.07% (5)
typical - 0.07% (5)
between - 0.07% (5)
avoid - 0.07% (5)
processors - 0.07% (5)
range, - 0.07% (5)
share - 0.07% (5)
ability - 0.07% (5)
draw - 0.07% (5)
folder - 0.07% (5)
i’d - 0.07% (5)
real - 0.07% (5)
error - 0.07% (5)
working - 0.07% (5)
research - 0.07% (5)
team - 0.07% (5)
too. - 0.07% (5)
reduce - 0.07% (5)
embedded - 0.07% (5)
quantize_graph - 0.07% (5)
install - 0.07% (5)
recommend - 0.07% (5)
mcparseface - 0.07% (5)
comments - 0.07% (5)
images - 0.07% (5)
demo - 0.07% (5)
above - 0.07% (5)
activation - 0.07% (5)
wanted - 0.07% (5)
floating-point - 0.07% (5)
close - 0.07% (5)
require - 0.07% (5)
ways - 0.07% (5)
2006 - 0.07% (5)
later - 0.07% (5)
follow - 0.07% (5)
decode - 0.05% (4)
might - 0.05% (4)
things - 0.05% (4)
disk - 0.05% (4)
model. - 0.05% (4)
decision - 0.05% (4)
making - 0.05% (4)
documentation - 0.05% (4)
constant - 0.05% (4)
millions - 0.05% (4)
overall - 0.05% (4)
tasks - 0.05% (4)
directly - 0.05% (4)
daisy - 0.05% (4)
within - 0.05% (4)
rounding - 0.05% (4)
bazel-bin/tensorflow/examples/label_image/label_image - 0.05% (4)
distributed - 0.05% (4)
compression - 0.05% (4)
tradition - 0.05% (4)
despite - 0.05% (4)
store - 0.05% (4)
learning, - 0.05% (4)
values, - 0.05% (4)
retrained_graph.pb - 0.05% (4)
hoping - 0.05% (4)
little - 0.05% (4)
well. - 0.05% (4)
recently - 0.05% (4)
talk - 0.05% (4)
clear - 0.05% (4)
easy - 0.05% (4)
outside - 0.05% (4)
problems - 0.05% (4)
hope - 0.05% (4)
amount - 0.05% (4)
break - 0.05% (4)
time, - 0.05% (4)
common - 0.05% (4)
during - 0.05% (4)
turn - 0.05% (4)
whole - 0.05% (4)
key - 0.05% (4)
processing - 0.05% (4)
screen - 0.05% (4)
conversion - 0.05% (4)
formats - 0.05% (4)
slightly - 0.05% (4)
function - 0.05% (4)
seen - 0.05% (4)
rounded_graph.pb - 0.05% (4)
though, - 0.05% (4)
appear - 0.05% (4)
box - 0.05% (4)
isn’t - 0.05% (4)
multiplications - 0.05% (4)
home - 0.05% (4)
exact - 0.05% (4)
detail - 0.05% (4)
excited - 0.05% (4)
love - 0.05% (4)
typically - 0.05% (4)
8-bit - 0.05% (4)
code, - 0.05% (4)
accuracy - 0.05% (4)
poe… - 0.05% (4)
rather - 0.05% (4)
become - 0.05% (4)
objects - 0.05% (4)
already - 0.05% (4)
taking - 0.05% (4)
needs - 0.05% (4)
again - 0.05% (4)
you’ve - 0.05% (4)
add - 0.05% (4)
update - 0.05% (4)
tutorial - 0.05% (4)
done - 0.05% (4)
in. - 0.05% (4)
long - 0.05% (4)
power - 0.05% (4)
nsstring* - 0.05% (4)
static - 0.05% (4)
understand - 0.05% (4)
remove - 0.05% (4)
we’ll - 0.05% (4)
after - 0.05% (4)
following - 0.05% (4)
idea - 0.05% (4)
signed - 0.05% (4)
makes - 0.05% (4)
math - 0.05% (4)
source - 0.05% (4)
below - 0.05% (4)
live - 0.05% (4)
noise, - 0.05% (4)
care - 0.05% (4)
simple - 0.05% (4)
tensors - 0.05% (4)
linear - 0.05% (4)
mine - 0.05% (4)
calculate - 0.04% (3)
determine - 0.04% (3)
posts - 0.04% (3)
array - 0.04% (3)
accurate - 0.04% (3)
involved - 0.04% (3)
forward - 0.04% (3)
everyone - 0.04% (3)
least - 0.04% (3)
default - 0.04% (3)
calculations, - 0.04% (3)
technique - 0.04% (3)
girls - 0.04% (3)
numerical - 0.04% (3)
example, - 0.04% (3)
say - 0.04% (3)
case - 0.04% (3)
google - 0.04% (3)
separate - 0.04% (3)
wider - 0.04% (3)
32-bit - 0.04% (3)
performance - 0.04% (3)
site - 0.04% (3)
focus - 0.04% (3)
explore - 0.04% (3)
10.0 - 0.04% (3)
future - 0.04% (3)
found - 0.04% (3)
instance - 0.04% (3)
convolution - 0.04% (3)
--labels=/tf_files/retrained_labels.txt - 0.04% (3)
--output_layer=final_result - 0.04% (3)
accumulate - 0.04% (3)
subject - 0.04% (3)
applied - 0.04% (3)
fit - 0.04% (3)
past - 0.04% (3)
practical - 0.04% (3)
--image=/tf_files/flower_photos/daisy/5547758_eea9edfd54_n.jpg - 0.04% (3)
known - 0.04% (3)
speed - 0.04% (3)
encounter - 0.04% (3)
what’s - 0.04% (3)
arithmetic - 0.04% (3)
computers - 0.04% (3)
tiny - 0.04% (3)
release - 0.04% (3)
actually - 0.04% (3)
approaches - 0.04% (3)
point, - 0.04% (3)
together - 0.04% (3)
graphs - 0.04% (3)
ideas - 0.04% (3)
cover - 0.04% (3)
files, - 0.04% (3)
correct - 0.04% (3)
them, - 0.04% (3)
pick - 0.04% (3)
window - 0.04% (3)
tensorflow/contrib/ios_examples/camera/data - 0.04% (3)
errors - 0.04% (3)
enable - 0.04% (3)
framework - 0.04% (3)
scale - 0.04% (3)
pixel - 0.04% (3)
87mb - 0.04% (3)
model, - 0.04% (3)
convert_graphdef_memmapped_format - 0.04% (3)
requires - 0.04% (3)
enough - 0.04% (3)
is. - 0.04% (3)
hit - 0.04% (3)
example. - 0.04% (3)
parallel - 0.04% (3)
big - 0.04% (3)
experiment - 0.04% (3)
right - 0.04% (3)
loaded - 0.04% (3)
warden's - 0.04% (3)
system - 0.04% (3)
finder - 0.04% (3)
buffers - 0.04% (3)
spot - 0.04% (3)
apply - 0.04% (3)
functions - 0.04% (3)
effort - 0.04% (3)
individual - 0.04% (3)
days - 0.04% (3)
sections - 0.04% (3)
leave - 0.04% (3)
brew - 0.04% (3)
watch - 0.04% (3)
summit - 0.04% (3)
loading - 0.04% (3)
program - 0.04% (3)
versions - 0.04% (3)
far - 0.04% (3)
useful - 0.04% (3)
python - 0.04% (3)
constants - 0.04% (3)
app, - 0.04% (3)
fail - 0.04% (3)
given - 0.04% (3)
out, - 0.04% (3)
available - 0.04% (3)
steps - 0.04% (3)
native - 0.04% (3)
natural - 0.04% (3)
while - 0.04% (3)
difference - 0.04% (3)
did - 0.04% (3)
text - 0.04% (3)
current - 0.04% (3)
above, - 0.04% (3)
picture - 0.04% (3)
involves - 0.04% (3)
place - 0.04% (3)
community - 0.04% (3)
helpful - 0.04% (3)
includes - 0.04% (3)
smaller - 0.04% (3)
taken - 0.04% (3)
almost - 0.04% (3)
automatically - 0.04% (3)
imagine - 0.04% (3)
looking - 0.04% (3)
devices. - 0.04% (3)
sizes - 0.04% (3)
logic - 0.04% (3)
hold - 0.04% (3)
user - 0.04% (3)
work. - 0.04% (3)
complex - 0.04% (3)
gets - 0.04% (3)
fan - 0.04% (3)
area - 0.04% (3)
competitions - 0.04% (3)
optimize_for_inference - 0.04% (3)
asked - 0.04% (3)
wait - 0.04% (3)
-3.0 - 0.04% (3)
aren’t - 0.04% (3)
hiker - 0.04% (3)
255 - 0.04% (3)
kaggle - 0.04% (3)
gpus, - 0.04% (3)
6.0 - 0.04% (3)
language - 0.04% (3)
card - 0.04% (3)
anything - 0.04% (3)
cost - 0.04% (3)
person - 0.04% (3)
group - 0.04% (3)
shrink - 0.04% (3)
linearly - 0.03% (2)
advantages - 0.03% (2)
entry - 0.03% (2)
-10.0 - 0.03% (2)
stage - 0.03% (2)
30.0 - 0.03% (2)
float. - 0.03% (2)
arbitrary - 0.03% (2)
equivalents, - 0.03% (2)
floats - 0.03% (2)
ops. - 0.03% (2)
cancel - 0.03% (2)
(for - 0.03% (2)
encourage - 0.03% (2)
spread - 0.03% (2)
myself - 0.03% (2)
eating - 0.03% (2)
montara - 0.03% (2)
mountain - 0.03% (2)
nature - 0.03% (2)
human - 0.03% (2)
guide - 0.03% (2)
dequantize - 0.03% (2)
started, - 0.03% (2)
propose - 0.03% (2)
(or - 0.03% (2)
field, - 0.03% (2)
forums - 0.03% (2)
kaggle, - 0.03% (2)
engineers - 0.03% (2)
finished - 0.03% (2)
company - 0.03% (2)
seeing - 0.03% (2)
software - 0.03% (2)
around - 0.03% (2)
effective - 0.03% (2)
others - 0.03% (2)
joining - 0.03% (2)
nano-computers - 0.03% (2)
couple - 0.03% (2)
self-contained - 0.03% (2)
realize - 0.03% (2)
grams - 0.03% (2)
sensor - 0.03% (2)
before. - 0.03% (2)
course - 0.03% (2)
engineering - 0.03% (2)
solve - 0.03% (2)
interested - 0.03% (2)
knowledge - 0.03% (2)
left - 0.03% (2)
code. - 0.03% (2)
presenting - 0.03% (2)
prior - 0.03% (2)
jeff - 0.03% (2)
personally - 0.03% (2)
recognized - 0.03% (2)
us. - 0.03% (2)
asking - 0.03% (2)
contest - 0.03% (2)
francisco, - 0.03% (2)
depths - 0.03% (2)
encountered - 0.03% (2)
subtract - 0.03% (2)
biases - 0.03% (2)
careful - 0.03% (2)
unless - 0.03% (2)
mentioned - 0.03% (2)
convenient - 0.03% (2)
subtle - 0.03% (2)
currently - 0.03% (2)
sets - 0.03% (2)
operator - 0.03% (2)
learned - 0.03% (2)
fall - 0.03% (2)
bits, - 0.03% (2)
bits. - 0.03% (2)
accumulated - 0.03% (2)
analyze - 0.03% (2)
improve - 0.03% (2)
always - 0.03% (2)
strong - 0.03% (2)
expensive - 0.03% (2)
views - 0.03% (2)
now, - 0.03% (2)
enter - 0.03% (2)
manager - 0.03% (2)
nearby - 0.03% (2)
decide - 0.03% (2)
expert - 0.03% (2)
teach - 0.03% (2)
fantastic - 0.03% (2)
hikes - 0.03% (2)
watershed - 0.03% (2)
rapidly - 0.03% (2)
member - 0.03% (2)
had - 0.03% (2)
reference - 0.03% (2)
life - 0.03% (2)
production, - 0.03% (2)
traditional - 0.03% (2)
until - 0.03% (2)
forms - 0.03% (2)
return - 0.03% (2)
mile - 0.03% (2)
trip - 0.03% (2)
highly - 0.03% (2)
works. - 0.03% (2)
waiting - 0.03% (2)
tensor, - 0.03% (2)
scripts - 0.03% (2)
~/projects/tensorflow - 0.03% (2)
commands - 0.03% (2)
window, - 0.03% (2)
those, - 0.03% (2)
automake - 0.03% (2)
brew, - 0.03% (2)
switch - 0.03% (2)
straightforward - 0.03% (2)
dependencies - 0.03% (2)
purposes - 0.03% (2)
demonstration - 0.03% (2)
one, - 0.03% (2)
folder, - 0.03% (2)
plain - 0.03% (2)
main - 0.03% (2)
behind - 0.03% (2)
easily - 0.03% (2)
fortunately - 0.03% (2)
kill - 0.03% (2)
lead - 0.03% (2)
pressure - 0.03% (2)
size, - 0.03% (2)
holding - 0.03% (2)
thanks - 0.03% (2)
expect - 0.03% (2)
tensorflow/contrib/ios_examples/camera/data/ - 0.03% (2)
navigator - 0.03% (2)
doesn’t - 0.03% (2)
128.0f; - 0.03% (2)
friend - 0.03% (2)
17, - 0.03% (2)
anyway? - 0.03% (2)
with! - 0.03% (2)
android - 0.03% (2)
begin - 0.03% (2)
display - 0.03% (2)
everything - 0.03% (2)
finally, - 0.03% (2)
output_layer_name - 0.03% (2)
input_layer_name - 0.03% (2)
std::string - 0.03% (2)
299; - 0.03% (2)
xcode, - 0.03% (2)
with. - 0.03% (2)
parameter - 0.03% (2)
created - 0.03% (2)
whether - 0.03% (2)
resources - 0.03% (2)
replace - 0.03% (2)
images, - 0.03% (2)
information. - 0.03% (2)
cameraexample - 0.03% (2)
this: - 0.03% (2)
choosing - 0.03% (2)
select - 0.03% (2)
disk, - 0.03% (2)
gives - 0.03% (2)
advice - 0.03% (2)
tensorflow, - 0.03% (2)
settings - 0.03% (2)
updated - 0.03% (2)
too, - 0.03% (2)
minutes - 0.03% (2)
compilation - 0.03% (2)
own. - 0.03% (2)
categories - 0.03% (2)
next, - 0.03% (2)
ends - 0.03% (2)
prompt - 0.03% (2)
won’t - 0.03% (2)
relies - 0.03% (2)
sensible - 0.03% (2)
image. - 0.03% (2)
latest - 0.03% (2)
network. - 0.03% (2)
containing - 0.03% (2)
installed - 0.03% (2)
assuming - 0.03% (2)
custom - 0.03% (2)
poets, - 0.03% (2)
homeabout - 0.03% (2)
content - 0.03% (2)
menu - 0.03% (2)
again. - 0.03% (2)
tensorflow/examples/label_image:label_image - 0.03% (2)
flowers - 0.03% (2)
leaving - 0.03% (2)
removes - 0.03% (2)
levels - 0.03% (2)
256 - 0.03% (2)
achieve - 0.03% (2)
usually - 0.03% (2)
compressed - 0.03% (2)
.ipa - 0.03% (2)
apple - 0.03% (2)
documentation, - 0.03% (2)
lots - 0.03% (2)
mathematical - 0.03% (2)
underlying - 0.03% (2)
nodes - 0.03% (2)
this, - 0.03% (2)
app. - 0.03% (2)
never - 0.03% (2)
loaded, - 0.03% (2)
feeding - 0.03% (2)
normally - 0.03% (2)
operation. - 0.03% (2)
decodejpeg - 0.03% (2)
write - 0.03% (2)
increase - 0.03% (2)
painful - 0.03% (2)
supported - 0.03% (2)
amounts - 0.03% (2)
limited - 0.03% (2)
who’s - 0.03% (2)
video - 0.03% (2)
bit. - 0.03% (2)
here, - 0.03% (2)
work? - 0.03% (2)
point. - 0.03% (2)
techniques - 0.03% (2)
teams. - 0.03% (2)
issue - 0.03% (2)
pure - 0.03% (2)
grow - 0.03% (2)
accelerate - 0.03% (2)
preserve - 0.03% (2)
challenge - 0.03% (2)
modern - 0.03% (2)
talked - 0.03% (2)
representations - 0.03% (2)
there. - 0.03% (2)
vision - 0.03% (2)
though. - 0.03% (2)
are! - 0.03% (2)
bound - 0.03% (2)
data, - 0.03% (2)
relationships - 0.03% (2)
lifting - 0.03% (2)
heavy - 0.03% (2)
hundred - 0.03% (2)
something - 0.03% (2)
adjective - 0.03% (2)
weights, - 0.03% (2)
inputs. - 0.03% (2)
three - 0.03% (2)
/tmp/inceptionv3.tgz - 0.03% (2)
conversions - 0.03% (2)
internal - 0.03% (2)
converted - 0.03% (2)
knows - 0.03% (2)
multiplication, - 0.03% (2)
writing - 0.03% (2)
stored - 0.03% (2)
first, - 0.03% (2)
graph, - 0.03% (2)
results. - 0.03% (2)
exactly - 0.03% (2)
runs - 0.03% (2)
inference. - 0.03% (2)
ccd - 0.03% (2)
converting - 0.03% (2)
higher - 0.03% (2)
experiments - 0.03% (2)
systems - 0.03% (2)
(which - 0.03% (2)
benefit - 0.03% (2)
zip - 0.03% (2)
numbers, - 0.03% (2)
single - 0.03% (2)
space - 0.03% (2)
low-precision - 0.03% (2)
seem - 0.03% (2)
yourself, - 0.03% (2)
classify - 0.03% (2)
costs - 0.03% (2)
box. - 0.03% (2)
faster. - 0.03% (2)
elements - 0.03% (2)
complicated - 0.03% (2)
flexible - 0.03% (2)
differences - 0.03% (2)
fewer - 0.03% (2)
moving - 0.03% (2)
pixels - 0.03% (2)
drawing - 0.03% (2)
decisions - 0.03% (2)
figure - 0.03% (2)
soon - 0.03% (2)
reading - 0.03% (2)
jobs - 0.03% (2)
deal - 0.03% (2)
instructions - 0.03% (2)
recipes - 0.03% (2)
applications. - 0.03% (2)
differently - 0.03% (2)
designed - 0.03% (2)
screen. - 0.03% (2)
graphics - 0.03% (2)
overview - 0.03% (2)
that, - 0.03% (2)
answer - 0.03% (2)
component - 0.03% (2)
shared - 0.03% (2)
pack - 0.03% (2)
took - 0.03% (2)
built - 0.03% (2)
instances - 0.03% (2)
random - 0.03% (2)
labor - 0.03% (2)
illustrate - 0.03% (2)
women - 0.03% (2)
commonly - 0.03% (2)
interesting - 0.03% (2)
written - 0.03% (2)
tackle - 0.03% (2)
fundamental - 0.03% (2)
sound - 0.03% (2)
tensorflow. - 0.03% (2)
sentence - 0.03% (2)
handle - 0.03% (2)
tried - 0.03% (2)
be. - 0.03% (2)
platform - 0.03% (2)
variety - 0.03% (2)
semantic - 0.03% (2)
fade - 0.03% (2)
mcparseface, - 0.03% (2)
girls, - 0.03% (2)
comes - 0.03% (2)
automatically. - 0.03% (2)
scalability - 0.03% (2)
larger - 0.03% (2)
wordpress.com. - 0.03% (2)
of the - 0.58% (44)
at the - 0.4% (30)
if you - 0.38% (29)
in the - 0.33% (25)
lot of - 0.3% (23)
to the - 0.3% (23)
need to - 0.29% (22)
that the - 0.25% (19)
that i - 0.22% (17)
and the - 0.2% (15)
you can - 0.18% (14)
or the - 0.16% (12)
machine learning - 0.16% (12)
our own - 0.15% (11)
on the - 0.15% (11)
tensorflow for - 0.15% (11)
for the - 0.15% (11)
the model - 0.15% (11)
is that - 0.13% (10)
able to - 0.13% (10)
pete warden - 0.13% (10)
your own - 0.13% (10)
deep learning - 0.13% (10)
with the - 0.13% (10)
neural network - 0.13% (10)
this is - 0.12% (9)
on that - 0.12% (9)
the file - 0.12% (9)
into the - 0.12% (9)
that we - 0.12% (9)
to get - 0.11% (8)
with a - 0.11% (8)
and max - 0.11% (8)
and then - 0.11% (8)
to see - 0.11% (8)
there are - 0.11% (8)
all of - 0.11% (8)
learning is - 0.11% (8)
that are - 0.11% (8)
make sure - 0.11% (8)
from the - 0.09% (7)
2016 by - 0.09% (7)
we need - 0.09% (7)
by pete - 0.09% (7)
warden in - 0.09% (7)
in uncategorized - 0.09% (7)
and output - 0.09% (7)
all the - 0.09% (7)
the tensor - 0.09% (7)
lot more - 0.09% (7)
the weights - 0.09% (7)
the same - 0.09% (7)
they can - 0.09% (7)
use the - 0.09% (7)
the trail - 0.09% (7)
because i - 0.09% (7)
you should - 0.09% (7)
you have - 0.09% (7)
neural networks - 0.09% (7)
it was - 0.09% (7)
the quantized - 0.08% (6)
and maximum - 0.08% (6)
eight bit - 0.08% (6)
have a - 0.08% (6)
by the - 0.08% (6)
is the - 0.08% (6)
bazel build - 0.08% (6)
so the - 0.08% (6)
for mobile - 0.08% (6)
the original - 0.08% (6)
floating point - 0.08% (6)
them in - 0.08% (6)
that can - 0.08% (6)
set of - 0.08% (6)
how to - 0.08% (6)
the min - 0.08% (6)
how you - 0.08% (6)
that they - 0.08% (6)
you need - 0.08% (6)
model file - 0.08% (6)
one of - 0.08% (6)
see the - 0.08% (6)
tend to - 0.08% (6)
because the - 0.08% (6)
to run - 0.08% (6)
operations that - 0.08% (6)
we use - 0.07% (5)
for poets - 0.07% (5)
all sorts - 0.07% (5)
that you - 0.07% (5)
that’s a - 0.07% (5)
to build - 0.07% (5)
you do - 0.07% (5)
if you’re - 0.07% (5)
to make - 0.07% (5)
so it’s - 0.07% (5)
run the - 0.07% (5)
we also - 0.07% (5)
the input - 0.07% (5)
and run - 0.07% (5)
the result - 0.07% (5)
model is - 0.07% (5)
inputs and - 0.07% (5)
a great - 0.07% (5)
advantage of - 0.07% (5)
on tensorflow - 0.07% (5)
sorts of - 0.07% (5)
in tensorflow - 0.07% (5)
a large - 0.07% (5)
that use - 0.05% (4)
so that - 0.05% (4)
is one - 0.05% (4)
rather than - 0.05% (4)
means that - 0.05% (4)
static nsstring* - 0.05% (4)
model files - 0.05% (4)
the top - 0.05% (4)
going to - 0.05% (4)
way to - 0.05% (4)
the tensorflow - 0.05% (4)
a folder - 0.05% (4)
your model - 0.05% (4)
on disk - 0.05% (4)
as the - 0.05% (4)
have been - 0.05% (4)
to take - 0.05% (4)
this can - 0.05% (4)
needed to - 0.05% (4)
version of - 0.05% (4)
easy to - 0.05% (4)
only a - 0.05% (4)
bazel-bin/tensorflow/examples/label_image/label_image \ - 0.05% (4)
i know - 0.05% (4)
the range - 0.05% (4)
been a - 0.05% (4)
sure that - 0.05% (4)
in machine - 0.05% (4)
doing a - 0.05% (4)
i’ll be - 0.05% (4)
the next - 0.05% (4)
you’ll see - 0.05% (4)
from a - 0.05% (4)
the results - 0.05% (4)
an example - 0.05% (4)
i would - 0.05% (4)
millions of - 0.05% (4)
the minimum - 0.05% (4)
float value - 0.05% (4)
minimum and - 0.05% (4)
to convert - 0.05% (4)
own model - 0.05% (4)
kind of - 0.05% (4)
through the - 0.05% (4)
don’t have - 0.05% (4)
the quantization - 0.05% (4)
it also - 0.04% (3)
because it’s - 0.04% (3)
applications i - 0.04% (3)
the screen - 0.04% (3)
think about - 0.04% (3)
that run - 0.04% (3)
men and - 0.04% (3)
for you - 0.04% (3)
part of - 0.04% (3)
and gpus - 0.04% (3)
very different - 0.04% (3)
as you - 0.04% (3)
starting to - 0.04% (3)
gpus are - 0.04% (3)
across a - 0.04% (3)
up the - 0.04% (3)
to break - 0.04% (3)
in more - 0.04% (3)
over the - 0.04% (3)
networks i - 0.04% (3)
the whole - 0.04% (3)
warden's blog - 0.04% (3)
of different - 0.04% (3)
do that - 0.04% (3)
thing about - 0.04% (3)
or matrix - 0.04% (3)
8 bits - 0.04% (3)
much more - 0.04% (3)
the actual - 0.04% (3)
but we - 0.04% (3)
the rounded - 0.04% (3)
on mobile - 0.04% (3)
working on - 0.04% (3)
most of - 0.04% (3)
our machine - 0.04% (3)
have to - 0.04% (3)
it can - 0.04% (3)
anything in - 0.04% (3)
be able - 0.04% (3)
to your - 0.04% (3)
they are - 0.04% (3)
photo by - 0.04% (3)
to help - 0.04% (3)
but there - 0.04% (3)
excited to - 0.04% (3)
and i’ll - 0.04% (3)
as well. - 0.04% (3)
pete warden's - 0.04% (3)
that we’re - 0.04% (3)
the large - 0.04% (3)
them with - 0.04% (3)
of these - 0.04% (3)
the approach - 0.04% (3)
i also - 0.04% (3)
do the - 0.04% (3)
to explore - 0.04% (3)
so much - 0.04% (3)
in this - 0.04% (3)
of deep - 0.04% (3)
of noise - 0.04% (3)
just another - 0.04% (3)
in float - 0.04% (3)
up with - 0.04% (3)
can get - 0.04% (3)
pick a - 0.04% (3)
to quantize - 0.04% (3)
you run - 0.04% (3)
into a - 0.04% (3)
bit calculations - 0.04% (3)
you look - 0.04% (3)
input and - 0.04% (3)
that have - 0.04% (3)
look at - 0.04% (3)
though the - 0.04% (3)
down to - 0.04% (3)
want to - 0.04% (3)
the kind - 0.04% (3)
love to - 0.04% (3)
more detail - 0.04% (3)
of those - 0.04% (3)
to load - 0.04% (3)
an image - 0.04% (3)
\ --image=/tf_files/flower_photos/daisy/5547758_eea9edfd54_n.jpg - 0.04% (3)
if the - 0.04% (3)
possible to - 0.04% (3)
\ --labels=/tf_files/retrained_labels.txt - 0.04% (3)
new model - 0.04% (3)
to give - 0.04% (3)
\ --output_layer=final_result - 0.04% (3)
produce a - 0.04% (3)
--image=/tf_files/flower_photos/daisy/5547758_eea9edfd54_n.jpg \ - 0.04% (3)
to work - 0.04% (3)
data folder - 0.04% (3)
build the - 0.04% (3)
--labels=/tf_files/retrained_labels.txt \ - 0.04% (3)
run this - 0.04% (3)
--output_layer=final_result \ - 0.04% (3)
const int - 0.04% (3)
the app, - 0.04% (3)
file to - 0.04% (3)
the files - 0.04% (3)
number of - 0.04% (3)
of your - 0.04% (3)
the labels - 0.04% (3)
at your - 0.04% (3)
what you - 0.04% (3)
for any - 0.04% (3)
which is - 0.04% (3)
than the - 0.04% (3)
to recognize - 0.04% (3)
this means - 0.04% (3)
know that - 0.04% (3)
that uses - 0.04% (3)
support for - 0.04% (3)
this tutorial - 0.04% (3)
slightly different - 0.04% (3)
and so - 0.04% (3)
of tensorflow - 0.04% (3)
the terminal - 0.04% (3)
the framework - 0.04% (3)
the overall - 0.04% (3)
the operation - 0.04% (3)
at this - 0.04% (3)
about the - 0.04% (3)
should see - 0.04% (3)
when the - 0.03% (2)
we don’t - 0.03% (2)
values in - 0.03% (2)
of values - 0.03% (2)
actual range - 0.03% (2)
would be - 0.03% (2)
16 bits - 0.03% (2)
to avoid - 0.03% (2)
matrix multiplication, - 0.03% (2)
you’re doing - 0.03% (2)
we know - 0.03% (2)
having to - 0.03% (2)
issue for - 0.03% (2)
to determine - 0.03% (2)
than 8 - 0.03% (2)
which we - 0.03% (2)
do for - 0.03% (2)
have more - 0.03% (2)
values are - 0.03% (2)
convert back - 0.03% (2)
reduce the - 0.03% (2)
network models - 0.03% (2)
that form - 0.03% (2)
matrix multiplications - 0.03% (2)
in size - 0.03% (2)
download size - 0.03% (2)
the precision - 0.03% (2)
of noise, - 0.03% (2)
app that - 0.03% (2)
and activation - 0.03% (2)
a representation - 0.03% (2)
into eight-bit - 0.03% (2)
lots of - 0.03% (2)
gives a - 0.03% (2)
once the - 0.03% (2)
ways to - 0.03% (2)
on how - 0.03% (2)
them into - 0.03% (2)
those in - 0.03% (2)
to pick - 0.03% (2)
the retrained - 0.03% (2)
holding the - 0.03% (2)
having a - 0.03% (2)
the one - 0.03% (2)
the future - 0.03% (2)
run label_image - 0.03% (2)
that aren’t - 0.03% (2)
for operations - 0.03% (2)
from float - 0.03% (2)
the operations - 0.03% (2)
on. the - 0.03% (2)
should be - 0.03% (2)
to calculate - 0.03% (2)
more expensive - 0.03% (2)
across the - 0.03% (2)
float values - 0.03% (2)
bit depths - 0.03% (2)
can represent - 0.03% (2)
and an - 0.03% (2)
for example, - 0.03% (2)
to understand - 0.03% (2)
are much - 0.03% (2)
the buffer - 0.03% (2)
the latest - 0.03% (2)
them through - 0.03% (2)
could train - 0.03% (2)
couple of - 0.03% (2)
the past - 0.03% (2)
amounts of - 0.03% (2)
on tiny - 0.03% (2)
looking for - 0.03% (2)
a comment - 0.03% (2)
are some - 0.03% (2)
uncategorized leave - 0.03% (2)
docker image. - 0.03% (2)
relies on - 0.03% (2)
once you’re - 0.03% (2)
just on - 0.03% (2)
unless you’re - 0.03% (2)
should find - 0.03% (2)
before i - 0.03% (2)
you’ll need - 0.03% (2)
retrained_graph.pb file - 0.03% (2)
the docker - 0.03% (2)
nearby hikes - 0.03% (2)
at wordpress.com. - 0.03% (2)
blog at - 0.03% (2)
you could - 0.03% (2)
taking a - 0.03% (2)
rain a - 0.03% (2)
the effort - 0.03% (2)
model into - 0.03% (2)
san francisco, - 0.03% (2)
vision summit - 0.03% (2)
for poets, - 0.03% (2)
almost every - 0.03% (2)
leave a - 0.03% (2)
montara mountain - 0.03% (2)
folder in - 0.03% (2)
are to - 0.03% (2)
embedded devices - 0.03% (2)
a fantastic - 0.03% (2)
know how - 0.03% (2)
i think - 0.03% (2)
are common - 0.03% (2)
use to - 0.03% (2)
i highly - 0.03% (2)
devices. if - 0.03% (2)
hoping will - 0.03% (2)
reference implementation - 0.03% (2)
it for - 0.03% (2)
performance on - 0.03% (2)
still need - 0.03% (2)
done in - 0.03% (2)
command to - 0.03% (2)
we can - 0.03% (2)
the float - 0.03% (2)
minimum from - 0.03% (2)
rounded version - 0.03% (2)
the important - 0.03% (2)
code, but - 0.03% (2)
i mentioned - 0.03% (2)
was the - 0.03% (2)
into machine - 0.03% (2)
from traditional - 0.03% (2)
things are - 0.03% (2)
the default - 0.03% (2)
there is - 0.03% (2)
offer a - 0.03% (2)
i’ve seen - 0.03% (2)
find a - 0.03% (2)
teach yourself - 0.03% (2)
we’re going - 0.03% (2)
of kaggle - 0.03% (2)
but if - 0.03% (2)
wanted to - 0.03% (2)
with one - 0.03% (2)
does the - 0.03% (2)
to teach - 0.03% (2)
build tensorflow/examples/label_image:label_image - 0.03% (2)
if your - 0.03% (2)
of their - 0.03% (2)
point at - 0.03% (2)
share the - 0.03% (2)
use in - 0.03% (2)
this gives - 0.03% (2)
calculations are - 0.03% (2)
this to - 0.03% (2)
is much - 0.03% (2)
size of - 0.03% (2)
and how - 0.03% (2)
applied to - 0.03% (2)
know what - 0.03% (2)
the file. - 0.03% (2)
been able - 0.03% (2)
have your - 0.03% (2)
i took - 0.03% (2)
own model, - 0.03% (2)
variety of - 0.03% (2)
seen the - 0.03% (2)
299; const - 0.03% (2)
= 299; - 0.03% (2)
= 128.0f; - 0.03% (2)
talk to - 0.03% (2)
when i - 0.03% (2)
to update - 0.03% (2)
than any - 0.03% (2)
const float - 0.03% (2)
data folder, - 0.03% (2)
networks with - 0.03% (2)
quantize neural - 0.03% (2)
in xcode, - 0.03% (2)
up with! - 0.03% (2)
you come - 0.03% (2)
see what - 0.03% (2)
try on - 0.03% (2)
finder window - 0.03% (2)
include the - 0.03% (2)
relationships in - 0.03% (2)
it’s possible - 0.03% (2)
like this: - 0.03% (2)
the parser - 0.03% (2)
set up - 0.03% (2)
this should - 0.03% (2)
what i - 0.03% (2)
the graph - 0.03% (2)
1 comment - 0.03% (2)
128.0f; const - 0.03% (2)
embedded vision - 0.03% (2)
more than - 0.03% (2)
large set - 0.03% (2)
comments photo - 0.03% (2)
much faster. - 0.03% (2)
asked me - 0.03% (2)
video card - 0.03% (2)
are very - 0.03% (2)
that costs - 0.03% (2)
large number - 0.03% (2)
example you - 0.03% (2)
involves a - 0.03% (2)
try to - 0.03% (2)
waiting for - 0.03% (2)
give an - 0.03% (2)
they’re designed - 0.03% (2)
of instructions - 0.03% (2)
so they - 0.03% (2)
on cpus - 0.03% (2)
17, 2016 - 0.03% (2)
are gpus, - 0.03% (2)
const std::string - 0.03% (2)
are set - 0.03% (2)
another fad - 0.03% (2)
not just - 0.03% (2)
why deep - 0.03% (2)
mcparseface, and - 0.03% (2)
girls, parser - 0.03% (2)
deep learning, - 0.03% (2)
it the - 0.03% (2)
what makes - 0.03% (2)
faster. the - 0.03% (2)
hoping to - 0.03% (2)
on android - 0.03% (2)
this scalability - 0.03% (2)
or for - 0.03% (2)
the high - 0.03% (2)
i’m excited - 0.03% (2)
come up - 0.03% (2)
but for - 0.03% (2)
we’ve been - 0.03% (2)
get it - 0.03% (2)
but with - 0.03% (2)
many more - 0.03% (2)
lower bit - 0.03% (2)
the os - 0.03% (2)
your models - 0.03% (2)
will help - 0.03% (2)
into memory - 0.03% (2)
over to - 0.03% (2)
eight-bit calculations - 0.03% (2)
that do - 0.03% (2)
before the - 0.03% (2)
can also - 0.03% (2)
on ram - 0.03% (2)
use of - 0.03% (2)
since it - 0.03% (2)
in one - 0.03% (2)
another reason - 0.03% (2)
that your - 0.03% (2)
file on - 0.03% (2)
to handle - 0.03% (2)
the memory - 0.03% (2)
the exact - 0.03% (2)
much better - 0.03% (2)
float inputs - 0.03% (2)
with float - 0.03% (2)
don’t compress - 0.03% (2)
what they - 0.03% (2)
example of - 0.03% (2)
move the - 0.03% (2)
of operations - 0.03% (2)
with all - 0.03% (2)
in floating-point - 0.03% (2)
a small - 0.03% (2)
to apply - 0.03% (2)
very similar - 0.03% (2)
here’s an - 0.03% (2)
may have - 0.03% (2)
thanks to - 0.03% (2)
with eight - 0.03% (2)
runs the - 0.03% (2)
out for - 0.03% (2)
would represent - 0.03% (2)
i’ve talked - 0.03% (2)
that means - 0.03% (2)
networks is - 0.03% (2)
go into - 0.03% (2)
focus on - 0.03% (2)
on them - 0.03% (2)
perform calculations - 0.03% (2)
to store - 0.03% (2)
become a - 0.03% (2)
needed for - 0.03% (2)
is very - 0.03% (2)
the computation - 0.03% (2)
a good - 0.03% (2)
should have - 0.03% (2)
let xcode - 0.03% (2)
of what - 0.03% (2)
an overview - 0.03% (2)
i wanted - 0.03% (2)
it should - 0.03% (2)
model and - 0.03% (2)
the camera - 0.03% (2)
to 6.0 - 0.03% (2)
also need - 0.03% (2)
an eight-bit - 0.03% (2)
plain graphdef - 0.03% (2)
each layer - 0.03% (2)
different floating - 0.03% (2)
all slightly - 0.03% (2)
the command - 0.03% (2)
since there - 0.03% (2)
on disk, - 0.03% (2)
then you - 0.03% (2)
numerical formats - 0.03% (2)
noise, and - 0.03% (2)
on your - 0.03% (2)
seem to - 0.03% (2)
the training - 0.03% (2)
it and - 0.03% (2)
the network - 0.03% (2)
inputs. if - 0.03% (2)
post to - 0.03% (2)
by pete warden - 0.09% (7)
2016 by pete - 0.09% (7)
warden in uncategorized - 0.09% (7)
pete warden in - 0.09% (7)
a lot more - 0.09% (7)
all of the - 0.08% (6)
tensorflow for mobile - 0.08% (6)
need to do - 0.07% (5)
we need to - 0.07% (5)
all sorts of - 0.07% (5)
if you have - 0.07% (5)
to make sure - 0.05% (4)
for mobile poe… - 0.05% (4)
tensorflow for poets - 0.05% (4)
machine learning is - 0.05% (4)
in machine learning - 0.05% (4)
one of the - 0.05% (4)
your own model - 0.05% (4)
minimum and maximum - 0.05% (4)
deep learning is - 0.05% (4)
make sure that - 0.05% (4)
pete warden's blog - 0.04% (3)
if you look - 0.04% (3)
you can see - 0.04% (3)
that we’re hoping - 0.04% (3)
it’s easy to - 0.04% (3)
look at the - 0.04% (3)
if you don’t - 0.04% (3)
be able to - 0.04% (3)
tend to be - 0.04% (3)
that can be - 0.04% (3)
the minimum and - 0.04% (3)
need to be - 0.04% (3)
i’d love to - 0.04% (3)
the model file - 0.04% (3)
here’s how you - 0.04% (3)
--labels=/tf_files/retrained_labels.txt \ --image=/tf_files/flower_photos/daisy/5547758_eea9edfd54_n.jpg - 0.04% (3)
bazel-bin/tensorflow/examples/label_image/label_image \ --output_layer=final_result - 0.04% (3)
\ --output_layer=final_result \ - 0.04% (3)
\ --labels=/tf_files/retrained_labels.txt \ - 0.04% (3)
you should see - 0.04% (3)
is that they - 0.04% (3)
the model is - 0.04% (3)
-3.0 to 6.0 - 0.03% (2)
inputs. if you - 0.03% (2)
to pick a - 0.03% (2)
pick a representation - 0.03% (2)
values that are - 0.03% (2)
are done in - 0.03% (2)
neural network models - 0.03% (2)
weights and activation - 0.03% (2)
calculations are done - 0.03% (2)
offer a lot - 0.03% (2)
where all of - 0.03% (2)
values in the - 0.03% (2)
a new model - 0.03% (2)
the number of - 0.03% (2)
the min and - 0.03% (2)
perform calculations on - 0.03% (2)
with float inputs - 0.03% (2)
i highly recommend - 0.03% (2)
than 8 bits - 0.03% (2)
i mentioned above, - 0.03% (2)
break into machine - 0.03% (2)
neural networks with - 0.03% (2)
how to quantize - 0.03% (2)
san francisco, so - 0.03% (2)
uncategorized leave a - 0.03% (2)
embedded vision summit - 0.03% (2)
i’m excited to - 0.03% (2)
any of the - 0.03% (2)
leave a comment - 0.03% (2)
you’re doing a - 0.03% (2)
how you can - 0.03% (2)
the actual range - 0.03% (2)
way to teach - 0.03% (2)
have been a - 0.03% (2)
to teach yourself - 0.03% (2)
to share the - 0.03% (2)
into machine learning - 0.03% (2)
how to break - 0.03% (2)
minimum from the - 0.03% (2)
the rounded version - 0.03% (2)
rounded version of - 0.03% (2)
as i mentioned - 0.03% (2)
so that the - 0.03% (2)
have a lot - 0.03% (2)
over the past - 0.03% (2)
to get it - 0.03% (2)
into the app - 0.03% (2)
need to update - 0.03% (2)
it should include - 0.03% (2)
xcode know that - 0.03% (2)
the data folder - 0.03% (2)
that it should - 0.03% (2)
let xcode know - 0.03% (2)
still need to - 0.03% (2)
you should find - 0.03% (2)
and run the - 0.03% (2)
that we can - 0.03% (2)
so if you - 0.03% (2)
the weights are - 0.03% (2)
the os can - 0.03% (2)
different floating point - 0.03% (2)
// if you - 0.03% (2)
87mb in size - 0.03% (2)
model is still - 0.03% (2)
to reduce the - 0.03% (2)
for operations that - 0.03% (2)
version of tensorflow - 0.03% (2)
build tensorflow/examples/label_image:label_image bazel-bin/tensorflow/examples/label_image/label_image - 0.03% (2)
to take advantage - 0.03% (2)
but if you - 0.03% (2)
we’re going to - 0.03% (2)
and make sure - 0.03% (2)
in your own - 0.03% (2)
you could train - 0.03% (2)
tensorflow for poets, - 0.03% (2)
of the input - 0.03% (2)
have your own - 0.03% (2)
quantize neural networks - 0.03% (2)
much faster. the - 0.03% (2)
you come up - 0.03% (2)
to see what - 0.03% (2)
it’s possible to - 0.03% (2)
this is the - 0.03% (2)
to men and - 0.03% (2)
that’s a big - 0.03% (2)
i’ve seen the - 0.03% (2)
in uncategorized 1 - 0.03% (2)
just another fad - 0.03% (2)
learning is not - 0.03% (2)
and why deep - 0.03% (2)
girls, parser mcparseface, - 0.03% (2)
but for the - 0.03% (2)
so they can - 0.03% (2)
your own model, - 0.03% (2)
able to do - 0.03% (2)
involves a lot - 0.03% (2)
to the next - 0.03% (2)
are much more - 0.03% (2)
give an overview - 0.03% (2)
comments photo by - 0.03% (2)
are gpus, anyway? - 0.03% (2)
what you come - 0.03% (2)
in tensorflow for - 0.03% (2)
the kind of - 0.03% (2)
an example of - 0.03% (2)
= 128.0f; const - 0.03% (2)
= 299; const - 0.03% (2)
blog at wordpress.com. - 0.03% (2)

Here you can find a chart of all your popular one-, two-, and three-word phrases. Google and other search engines infer what your page is about from the words you use most frequently.

Copyright © 2015-2016 hupso.pl. All rights reserved.

Hupso.pl is a web service where, with a single click, you can quickly and easily check a website for SEO. We offer free website search-engine optimization as well as valuation of domains and websites. We maintain a ranking of Polish websites and an Alexa-based site ranking.