3.58 score from hupso.pl for:
petewarden.com



HTML Content


Title pete warden's blog

Length: 24, Words: 4
Description ever tried. ever failed. no matter. try again. fail again. fail better.

Length: 71, Words: 12
Keywords empty
Robots
Charset UTF-8
Og Meta - Title exist
Og Meta - Description exist
Og Meta - Site name exist
The title should be between 10 and 70 characters long (including spaces), and fewer than 12 words.
The meta description should be between 50 and 160 characters long (including spaces), and fewer than 24 words.
The character encoding should be declared; UTF-8 is probably the best character set to choose, since it is the most international encoding.
Open Graph meta tags should be present on the page (more information about the Open Graph protocol: http://ogp.me/)

SEO Content

Words/Characters 6345
Text/HTML 35.79 %
Headings H1 14
H2 1
H3 1
H4 0
H5 0
H6 0
H1
pete warden's blog
how to label images quickly
why deep learning needs assembler hackers
rewriting tensorflow graphs with the gtt
ai and unreliable electronics (*batteries not included)
tensorflow for mobile poets
what are gpus, anyway?
bossy girls, parser mcparseface, and why deep learning is not just another fad
post navigation
follow @petewarden on twitter
recent posts
recent comments
archives
footer menu
H2
ever tried. ever failed. no matter. try again. fail again. fail better.
H3
pete warden's blog
H4
H5
H6
strong
imagenet_comp_graph_label_strings.txt
tensorflow_inception_graph.pb
b
i
em imagenet_comp_graph_label_strings.txt
tensorflow_inception_graph.pb
Bolds strong 2
b 0
i 0
em 2
The page content should contain more than 250 words, with a text-to-HTML ratio above 20%.
Headings: use heading tags (h1, h2, h3, ...) to mark the topic of sections or paragraphs on the page, but generally use fewer than 6 of each heading tag to keep the page concise.
Style: use strong and italic tags to emphasize your page's keywords, but don't overuse them (fewer than 16 strong tags and 16 italic tags).

Page statistics

twitter:title empty
twitter:description empty
google+ itemprop=name empty
External files 29
CSS files 9
JavaScript files 20
Files: reduce the total number of referenced files (CSS + JavaScript) to a maximum of 7-8.

Internal and external links

Links 232
Internal links 4
External links 228
Links without a Title attribute 207
Links with the NOFOLLOW attribute 0
Links: use the title attribute for every link. A nofollow link tells search engine bots not to follow it; pay attention to how nofollow is used.

Internal links

skip to content #content
my tweets
#page
cancel #

External links

pete warden's blog https://petewarden.com/
home https://petewarden.com/
about https://petewarden.com/about/
https://petewarden.com/2017/04/26/how-to-label-images-quickly/
how to label images quickly https://petewarden.com/2017/04/26/how-to-label-images-quickly/
april 26, 2017 https://petewarden.com/2017/04/26/how-to-label-images-quickly/
pete warden https://petewarden.com/author/petewarden/
uncategorized https://petewarden.com/category/uncategorized/
1 comment https://petewarden.com/2017/04/26/how-to-label-images-quickly/#comments
flowers set of images http://download.tensorflow.org/example_images/flower_photos.tgz
tensorflow for poets https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/index.html#0
these instructions for setting up a keyboard shortcut to open the tags menu for an item http://hints.macworld.com/article.php?story=20140504114022595
on twitter https://twitter.com/petewarden
https://petewarden.com/2017/01/03/why-deep-learning-needs-assembler-hackers/
why deep learning needs assembler hackers https://petewarden.com/2017/01/03/why-deep-learning-needs-assembler-hackers/
january 3, 2017 https://petewarden.com/2017/01/03/why-deep-learning-needs-assembler-hackers/
pete warden https://petewarden.com/author/petewarden/
uncategorized https://petewarden.com/category/uncategorized/
3 comments https://petewarden.com/2017/01/03/why-deep-learning-needs-assembler-hackers/#comments
- https://petewarden.files.wordpress.com/2017/01/screen-shot-2017-01-03-at-9-58-12-am.png https://www.flickr.com/photos/daniel-lopez/4154216384/in/photolist-7k6rnq-clozjk-phdkyc-dpkenm-9vjjao-8n81vr-qjs5jq-6dmptn-peemle-nncbjr-nj8x4t-eeiaym-a5n2cd-eestbe-ocquoo-dkob7a-efxkan-cfqkl7-buomx4-ouc8lu-7tfbsx-cmkwx7-guj9nc-ih1u42-6xtvw6-so4b5x-83qbtc-b1wuje-p33vnv-6qh2vs-h2dund-efduys-pzyduh-6vxeby-rp9tgk-dmd2z1-e96kup-bkhv7a-gr3k8q-aadinw-ddk4t5-dm7sjv-hkanqz-mftrhh-oi9xxt-rpqsgd-rve8en-ogcwnq-q6kqyb-datyzd
photo by daniel lopez https://www.flickr.com/photos/daniel-lopez/4154216384/in/photolist-7k6rnq-clozjk-phdkyc-dpkenm-9vjjao-8n81vr-qjs5jq-6dmptn-peemle-nncbjr-nj8x4t-eeiaym-a5n2cd-eestbe-ocquoo-dkob7a-efxkan-cfqkl7-buomx4-ouc8lu-7tfbsx-cmkwx7-guj9nc-ih1u42-6xtvw6-so4b5x-83qbtc-b1wuje-p33vnv-6qh2vs-h2dund-efduys-pzyduh-6vxeby-rp9tgk-dmd2z1-e96kup-bkhv7a-gr3k8q-aadinw-ddk4t5-dm7sjv-hkanqz-mftrhh-oi9xxt-rpqsgd-rve8en-ogcwnq-q6kqyb-datyzd
the gemm matrix multiply function https://petewarden.com/2015/10/25/an-engineers-guide-to-gemm/
powers deep learning https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/
gotoblas https://en.wikipedia.org/wiki/gotoblas
openblas http://www.openblas.net/
eigen http://eigen.tuxfamily.org/index.php?title=main_page
gemmlowp https://github.com/google/gemmlowp
scott gray https://twitter.com/scottgray76
the winograd algorithm https://www.nervanasys.com/winograd-2/
https://petewarden.com/2016/12/30/rewriting-tensorflow-graphs-with-the-gtt/
rewriting tensorflow graphs with the gtt https://petewarden.com/2016/12/30/rewriting-tensorflow-graphs-with-the-gtt/
december 30, 2016 https://petewarden.com/2016/12/30/rewriting-tensorflow-graphs-with-the-gtt/
pete warden https://petewarden.com/author/petewarden/
uncategorized https://petewarden.com/category/uncategorized/
1 comment https://petewarden.com/2016/12/30/rewriting-tensorflow-graphs-with-the-gtt/#comments
- https://www.flickr.com/photos/sdstrowes/8248580877/in/photolist-dyu8hf-4lznr6-4mx9v5-7afs9z-3mcfgh-dtpiy7-ahr9hi-dtieem-c3qadw-2exrv-pv1e43-cwewjf-7neaqj-9pf85a-svebgv-azft21-puvdfk-qs5f2s-pnunw-fhfzmu-oygpqd-aostnc-bbjpvi-fcznim-8fcw89-8jgii9-hsaz4o-3krudh-dzanga-sgoaye-mm6hct-7dgfe7-krrnmc-564rfg-4nhcwe-jvxcqb-bopxrg-8mps3u-68crnm-bo3thm-dbvfkk-eaagwp-kwn32f-7ahkgy-asbnwn-7vlz96-4kvpot-pnunm-9rjv6c-crkoaq
photo by stephen d. strowes https://www.flickr.com/photos/sdstrowes/8248580877/in/photolist-dyu8hf-4lznr6-4mx9v5-7afs9z-3mcfgh-dtpiy7-ahr9hi-dtieem-c3qadw-2exrv-pv1e43-cwewjf-7neaqj-9pf85a-svebgv-azft21-puvdfk-qs5f2s-pnunw-fhfzmu-oygpqd-aostnc-bbjpvi-fcznim-8fcw89-8jgii9-hsaz4o-3krudh-dzanga-sgoaye-mm6hct-7dgfe7-krrnmc-564rfg-4nhcwe-jvxcqb-bopxrg-8mps3u-68crnm-bo3thm-dbvfkk-eaagwp-kwn32f-7ahkgy-asbnwn-7vlz96-4kvpot-pnunm-9rjv6c-crkoaq
trimming parts of the graph that aren’t needed for just running inference https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/readme.md#strip_unused_nodes
folding batch normalization nodes into precalculated weights https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/readme.md#fold_batch_norms
turning constant sub expressions into single nodes https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/readme.md#fold_constants
rewriting calculations in eight bit https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/readme.md#eight-bit-calculations
graph transform tool https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/readme.md
matching operators https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/readme.md#pattern-syntax
@petewarden https://github.com/petewarden
https://petewarden.com/2016/12/29/ai-and-unreliable-electronics-batteries-not-included/
ai and unreliable electronics (*batteries not included) https://petewarden.com/2016/12/29/ai-and-unreliable-electronics-batteries-not-included/
december 29, 2016 https://petewarden.com/2016/12/29/ai-and-unreliable-electronics-batteries-not-included/
pete warden https://petewarden.com/author/petewarden/
uncategorized https://petewarden.com/category/uncategorized/
1 comment https://petewarden.com/2016/12/29/ai-and-unreliable-electronics-batteries-not-included/#comments
- https://www.flickr.com/photos/torley/8485063847/in/photolist-dvnasg-ddjzzj-qszml1-cfqc3s-dtxpth-ddebmt-ddebd8-ddebxz-crv4f9-pqdqjw-cptmqj-cqmwm1-jfvahc-bn4srz-pvpu1r-abyuup-bawnw8-nxnuzm-ejzqmv-akwaws-cdycf7-n9wwdv-bwt41f-9e3wwo-bmex5u-re1ftm-mhpzzk-da8emt-6hzisw-mhpabt-i9ycqj-86cjwp-byxmy8-9d3ut1-cnttn7-7hbhct-9qfuyj-cldp1u-8zqepc-avnxwq-nghuik-bbuzrk-a3gx1q-a3gwz3-cznanl-m8zseu-qskufm-gv5m7b-or1fuf-pmsfn8
picture by torley https://www.flickr.com/photos/torley/8485063847/in/photolist-dvnasg-ddjzzj-qszml1-cfqc3s-dtxpth-ddebmt-ddebd8-ddebxz-crv4f9-pqdqjw-cptmqj-cqmwm1-jfvahc-bn4srz-pvpu1r-abyuup-bawnw8-nxnuzm-ejzqmv-akwaws-cdycf7-n9wwdv-bwt41f-9e3wwo-bmex5u-re1ftm-mhpzzk-da8emt-6hzisw-mhpabt-i9ycqj-86cjwp-byxmy8-9d3ut1-cnttn7-7hbhct-9qfuyj-cldp1u-8zqepc-avnxwq-nghuik-bbuzrk-a3gx1q-a3gwz3-cznanl-m8zseu-qskufm-gv5m7b-or1fuf-pmsfn8
arm research summit https://developer.arm.com/research/summit
james myers http://www.arm.ecs.soton.ac.uk/people/james%20myers
smart sensors https://petewarden.com/2015/10/03/semantic-sensors/
unlikely to change soon https://www.technologyreview.com/s/534866/why-we-dont-have-battery-breakthroughs/
ble sending data just a foot draws more than 10 milliwatts https://petewarden.com/2015/10/08/smartphone-energy-consumption/
rowhammer https://googleprojectzero.blogspot.com/2015/03/exploiting-dram-rowhammer-bug-to-gain.html
onur mutlu https://people.inf.ethz.ch/omutlu/
using deep learning and microphones to predict problems with machinery http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/deep-learning-ai-listens-to-machines-for-signs-of-trouble
https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/
tensorflow for mobile poets https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/
september 27, 2016 https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/
pete warden https://petewarden.com/author/petewarden/
uncategorized https://petewarden.com/category/uncategorized/
32 comments https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/#comments
tensorflow for poets https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/index.html#0
tensorflow for poets https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/index.html
https://www.youtube.com/watch?v=_bkzppniydo https://www.youtube.com/watch?v=_bkzppniydo
tensorflow/contrib/makefile/tf_op_files.txt https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/makefile/tf_op_files.txt
brew http://brew.sh/index.html
https://github.com/tensorflow/tensorflow https://github.com/tensorflow/tensorflow
https://petewarden.com/2016/05/17/what-are-gpus-anyway/
what are gpus, anyway? https://petewarden.com/2016/05/17/what-are-gpus-anyway/
may 17, 2016 https://petewarden.com/2016/05/17/what-are-gpus-anyway/
pete warden https://petewarden.com/author/petewarden/
uncategorized https://petewarden.com/category/uncategorized/
4 comments https://petewarden.com/2016/05/17/what-are-gpus-anyway/#comments
photo by mark, vicki, ellaura, and mason https://www.flickr.com/photos/brown_family_album/4607229186/in/photolist-828fhy-5b1wvj-8ytney-828fxo-b99uck-56yzsr-4fvvgx-p8cms-bgrwkm-4jtpmc-9aios-9qszhw-8257fi-9aikq-8ytmbb-9aior-8tn9aq-djv982-6evr7n-9aikl-9aiof-2bgnen-lyxs4-6v5p4g-4fvv7m-6uzyxm-4pj8mq-668zgu-4pj68y-4pe1zv-4pe762-4pjbr5-4pj6vy-4pe4ei-4pe2wi-9jvm6b-6uzyy2-4pe1kx-4pe3kh-4pj8kl-4ctqb1-4jyu2c-9aimv-9ainv-9smduw-7e1wsf-5c3rfn-3icv1d-8pb4q-4pj7dq
https://petewarden.com/2016/05/15/bossy-girls-parser-mcparseface-and-why-deep-learning-is-not-just-another-fad/
bossy girls, parser mcparseface, and why deep learning is not just another fad https://petewarden.com/2016/05/15/bossy-girls-parser-mcparseface-and-why-deep-learning-is-not-just-another-fad/
may 15, 2016 https://petewarden.com/2016/05/15/bossy-girls-parser-mcparseface-and-why-deep-learning-is-not-just-another-fad/
pete warden https://petewarden.com/author/petewarden/
uncategorized https://petewarden.com/category/uncategorized/
1 comment https://petewarden.com/2016/05/15/bossy-girls-parser-mcparseface-and-why-deep-learning-is-not-just-another-fad/#comments
fuzzy logic https://en.wikipedia.org/wiki/fuzzy_logic
corba https://en.wikipedia.org/wiki/common_object_request_broker_architecture
semantic web https://en.wikipedia.org/wiki/semantic_web
tensorflow https://tensorflow.org/
tried to build approachable tutorials https://petewarden.com/2016/02/28/tensorflow-for-poets/
release parsey mcparseface http://googleresearch.blogspot.com/2016/05/announcing-syntaxnet-worlds-most.html
a great article on why bossy is so gendered https://linguisticpulse.com/2014/03/28/no-really-bossy-is-gendered/
download parser mcparseface https://github.com/tensorflow/models/tree/master/syntaxnet
« older posts https://petewarden.com/page/2/
rss - posts https://petewarden.com/feed/
how to label images quickly https://petewarden.com/2017/04/26/how-to-label-images-quickly/
why deep learning needs assembler hackers https://petewarden.com/2017/01/03/why-deep-learning-needs-assembler-hackers/
rewriting tensorflow graphs with the gtt https://petewarden.com/2016/12/30/rewriting-tensorflow-graphs-with-the-gtt/
ai and unreliable electronics (*batteries not included) https://petewarden.com/2016/12/29/ai-and-unreliable-electronics-batteries-not-included/
tensorflow for mobile poets https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/
tensorflow for mobile poe… https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/comment-page-1/#comment-103036
how to label images quick… https://petewarden.com/2017/04/26/how-to-label-images-quickly/comment-page-1/#comment-103031
- https://petewarden.com/
pete warden https://petewarden.com/
tensorflow for mobile poe… https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/comment-page-1/#comment-103026
tensorflow for mobile poe… https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/comment-page-1/#comment-103018
- https://artoftheless.wordpress.com
krishna https://artoftheless.wordpress.com
how i ended up using s3 as my… https://petewarden.com/2010/10/01/how-i-ended-up-using-s3-as-my-database/comment-page-1/#comment-103015
april 2017 https://petewarden.com/2017/04/
january 2017 https://petewarden.com/2017/01/
december 2016 https://petewarden.com/2016/12/
september 2016 https://petewarden.com/2016/09/
may 2016 https://petewarden.com/2016/05/
april 2016 https://petewarden.com/2016/04/
march 2016 https://petewarden.com/2016/03/
february 2016 https://petewarden.com/2016/02/
november 2015 https://petewarden.com/2015/11/
october 2015 https://petewarden.com/2015/10/
september 2015 https://petewarden.com/2015/09/
august 2015 https://petewarden.com/2015/08/
may 2015 https://petewarden.com/2015/05/
april 2015 https://petewarden.com/2015/04/
march 2015 https://petewarden.com/2015/03/
january 2015 https://petewarden.com/2015/01/
december 2014 https://petewarden.com/2014/12/
november 2014 https://petewarden.com/2014/11/
october 2014 https://petewarden.com/2014/10/
september 2014 https://petewarden.com/2014/09/
august 2014 https://petewarden.com/2014/08/
july 2014 https://petewarden.com/2014/07/
june 2014 https://petewarden.com/2014/06/
may 2014 https://petewarden.com/2014/05/
april 2014 https://petewarden.com/2014/04/
march 2014 https://petewarden.com/2014/03/
february 2014 https://petewarden.com/2014/02/
january 2014 https://petewarden.com/2014/01/
december 2013 https://petewarden.com/2013/12/
november 2013 https://petewarden.com/2013/11/
october 2013 https://petewarden.com/2013/10/
september 2013 https://petewarden.com/2013/09/
august 2013 https://petewarden.com/2013/08/
july 2013 https://petewarden.com/2013/07/
june 2013 https://petewarden.com/2013/06/
may 2013 https://petewarden.com/2013/05/
april 2013 https://petewarden.com/2013/04/
march 2013 https://petewarden.com/2013/03/
february 2013 https://petewarden.com/2013/02/
january 2013 https://petewarden.com/2013/01/
november 2012 https://petewarden.com/2012/11/
october 2012 https://petewarden.com/2012/10/
august 2012 https://petewarden.com/2012/08/
july 2012 https://petewarden.com/2012/07/
june 2012 https://petewarden.com/2012/06/
may 2012 https://petewarden.com/2012/05/
april 2012 https://petewarden.com/2012/04/
march 2012 https://petewarden.com/2012/03/
february 2012 https://petewarden.com/2012/02/
january 2012 https://petewarden.com/2012/01/
december 2011 https://petewarden.com/2011/12/
november 2011 https://petewarden.com/2011/11/
october 2011 https://petewarden.com/2011/10/
september 2011 https://petewarden.com/2011/09/
august 2011 https://petewarden.com/2011/08/
july 2011 https://petewarden.com/2011/07/
june 2011 https://petewarden.com/2011/06/
may 2011 https://petewarden.com/2011/05/
april 2011 https://petewarden.com/2011/04/
march 2011 https://petewarden.com/2011/03/
february 2011 https://petewarden.com/2011/02/
january 2011 https://petewarden.com/2011/01/
december 2010 https://petewarden.com/2010/12/
november 2010 https://petewarden.com/2010/11/
october 2010 https://petewarden.com/2010/10/
september 2010 https://petewarden.com/2010/09/
august 2010 https://petewarden.com/2010/08/
july 2010 https://petewarden.com/2010/07/
june 2010 https://petewarden.com/2010/06/
may 2010 https://petewarden.com/2010/05/
april 2010 https://petewarden.com/2010/04/
march 2010 https://petewarden.com/2010/03/
february 2010 https://petewarden.com/2010/02/
january 2010 https://petewarden.com/2010/01/
december 2009 https://petewarden.com/2009/12/
november 2009 https://petewarden.com/2009/11/
october 2009 https://petewarden.com/2009/10/
september 2009 https://petewarden.com/2009/09/
august 2009 https://petewarden.com/2009/08/
july 2009 https://petewarden.com/2009/07/
june 2009 https://petewarden.com/2009/06/
may 2009 https://petewarden.com/2009/05/
april 2009 https://petewarden.com/2009/04/
march 2009 https://petewarden.com/2009/03/
february 2009 https://petewarden.com/2009/02/
january 2009 https://petewarden.com/2009/01/
december 2008 https://petewarden.com/2008/12/
november 2008 https://petewarden.com/2008/11/
october 2008 https://petewarden.com/2008/10/
september 2008 https://petewarden.com/2008/09/
august 2008 https://petewarden.com/2008/08/
july 2008 https://petewarden.com/2008/07/
june 2008 https://petewarden.com/2008/06/
may 2008 https://petewarden.com/2008/05/
april 2008 https://petewarden.com/2008/04/
march 2008 https://petewarden.com/2008/03/
february 2008 https://petewarden.com/2008/02/
january 2008 https://petewarden.com/2008/01/
december 2007 https://petewarden.com/2007/12/
november 2007 https://petewarden.com/2007/11/
october 2007 https://petewarden.com/2007/10/
september 2007 https://petewarden.com/2007/09/
august 2007 https://petewarden.com/2007/08/
july 2007 https://petewarden.com/2007/07/
june 2007 https://petewarden.com/2007/06/
may 2007 https://petewarden.com/2007/05/
april 2007 https://petewarden.com/2007/04/
march 2007 https://petewarden.com/2007/03/
december 2006 https://petewarden.com/2006/12/
november 2006 https://petewarden.com/2006/11/
october 2006 https://petewarden.com/2006/10/
september 2006 https://petewarden.com/2006/09/
august 2006 https://petewarden.com/2006/08/
pete warden's blog https://petewarden.com/
home https://petewarden.com/
about https://petewarden.com/about/
blog at wordpress.com. https://wordpress.com/?ref=footer_blog
pete warden's blog https://petewarden.com/
blog at wordpress.com. https://wordpress.com/?ref=footer_blog

Images

Images 16
Images without an ALT attribute 7
Images without a TITLE attribute 16
Use the ALT and TITLE attributes for every image.

Images without a TITLE attribute

https://petewarden.files.wordpress.com/2017/04/screen-shot-2017-04-26-at-12-33-36-pm.png?w=550
https://petewarden.files.wordpress.com/2017/04/screen-shot-2017-04-26-at-1-26-45-pm.png?w=550
https://petewarden.files.wordpress.com/2017/01/screen-shot-2017-01-03-at-9-58-12-am.png?w=550
https://petewarden.files.wordpress.com/2016/12/networks.png?w=550
https://petewarden.files.wordpress.com/2016/12/screen-shot-2016-12-28-at-6-28-16-pm.png?w=550
https://petewarden.files.wordpress.com/2016/09/screen-shot-2016-09-27-at-8-56-06-am.png?w=550
https://petewarden.files.wordpress.com/2016/09/screen-shot-2016-09-26-at-12-39-14-pm.png?w=550
https://petewarden.files.wordpress.com/2016/05/screen-shot-2016-05-16-at-5-54-04-pm.png?w=550
https://petewarden.files.wordpress.com/2016/05/asawb.png?w=550
https://1.gravatar.com/avatar/dc71e8edb4f8d768fd20ba0b3b734fbe?s=48&d=identicon&r=g
https://1.gravatar.com/avatar/776e32e89f3f1ec7384e4dac5330d671?s=48&d=identicon&r=g
https://0.gravatar.com/avatar/9cbf603d5f93133178367214f1e091b9?s=48&d=identicon&r=g
https://1.gravatar.com/avatar/dc71e8edb4f8d768fd20ba0b3b734fbe?s=48&d=identicon&r=g
https://2.gravatar.com/avatar/eefbc91ad1186cec28c1d5252a01e00f?s=48&d=identicon&r=g
https://sb.scorecardresearch.com/p?c1=2&c2=7518284&c3=&c4=&c5=&c6=&c15=&cv=2.0&cj=1
https://pixel.wp.com/b.gif?v=noscript

Images without an ALT attribute

https://1.gravatar.com/avatar/dc71e8edb4f8d768fd20ba0b3b734fbe?s=48&d=identicon&r=g
https://1.gravatar.com/avatar/776e32e89f3f1ec7384e4dac5330d671?s=48&d=identicon&r=g
https://0.gravatar.com/avatar/9cbf603d5f93133178367214f1e091b9?s=48&d=identicon&r=g
https://1.gravatar.com/avatar/dc71e8edb4f8d768fd20ba0b3b734fbe?s=48&d=identicon&r=g
https://2.gravatar.com/avatar/eefbc91ad1186cec28c1d5252a01e00f?s=48&d=identicon&r=g
https://sb.scorecardresearch.com/p?c1=2&c2=7518284&c3=&c4=&c5=&c6=&c15=&cv=2.0&cj=1
https://pixel.wp.com/b.gif?v=noscript

Ranking:

Alexa Traffic
Daily Global Rank Trend
Daily Reach (Percent)

Majestic SEO

Text on page:

pete warden's blog
ever tried. ever failed. no matter. try again. fail again. fail better.

how to label images quickly
april 26, 2017 by pete warden in uncategorized 1 comment

i’ve found collecting great data is a lot more important than using the latest architecture when you’re trying to get good results in deep learning, so ever since my jetpac days i’ve spent a lot of time trying to come up with good ways to refine my training sets. i’ve written or used a lot of different user interfaces custom designed for this, but surprisingly i’ve found that the stock finder window in os x has been the most productive! here is how i curated the flowers set of images that’s used in tensorflow for poets, and i’ve found i can sort through many thousands of images an hour using this approach.

- copy and decompress the images onto a folder on my os x machine.
- open the folder in the os x finder app, the normal file viewer.
- choose the ‘column’ view for the finder window, which is an icon in the top bar, the third from the left in the view choices.
- select the first image. you should now see a small preview picture in the right-hand column.
- move the mouse pointer over the right-hand edge of the window, until you see the cursor change into a ‘drag left/right’ icon.
- drag the right-hand side of the finder window out. you should see the image preview get larger. stop once the preview size is no longer growing.

you should now have a window that looks like the image at the start of the post. there are a couple of ways of using this view. if i have a set of images that have been roughly sorted, but i want to do some quality control by weeding out pictures that are misclassified, i’ll use the up and down arrow keys to move through the images, look at each preview to quickly tell if it’s correct, and press the command and delete keys to remove it if not. after removing a photo, the selection automatically moves onto the next image, which is convenient.
if i have a large set of photos i want to label as belonging to a set of categories, rather than just rejecting bad labels, then i’ll use a slightly more involved approach. the key is to use “tags” in os x (which used to be called labels). you can follow these instructions for setting up a keyboard shortcut to open the tags menu for an item, and then move through the files using the down keys, assigning tags as you go. unfortunately os x removed the ability to apply particular tags through a single keyboard shortcut, which used to be possible in older versions of the system, but this can still be an efficient way to label large sets of images.

another approach i sometimes use to very quickly remove a small number of bad labels is to open a folder of images using the icon view in the finder, and then crank up the preview size slider in the bottom right corner of the window. you may have to select “view->arrange by->name” from the top menu to ensure that the enlarged icons all fit inside the window. i don’t find this as efficient for moving through every image as the column view, but if i want to quickly visually scan to find a few rogue images it’s very handy. i’ll usually just grab the scroll bar at the right hand side, or use mouse scroll to quickly look through the entire data set, and then click to select any that i want to remove.

what i like about these approaches is that they are very lightweight, i don’t need to install any special software, and the speed of the preview loading in the finder beats any custom software that i’ve found, so i can run through a lot of images very fast. anyway, i hope you find them useful too, and do let me know your favorite labeling hacks in the comments or on twitter.
why deep learning needs assembler hackers
january 3, 2017 by pete warden in uncategorized 3 comments

photo by daniel lopez

take a look at this function:

for (j = 0; j < n; j++) {
  for (i = 0; i < m; i++) {
    float total(0);
    for (l = 0; l < k; l++) {
      const size_t a_index = ((i * a_i_stride) + (l * a_l_stride));
      const float a_value = a[a_index];
      const size_t b_index = ((j * b_j_stride) + (l * b_l_stride));
      const float b_value = b[b_index];
      total += (a_value * b_value);
    }
    const size_t c_index = ((i * c_i_stride) + (j * c_j_stride));
    c[c_index] = total;
  }
}

for something so simple, it turns out it’s amazingly hard for compilers to speed up without a lot of human intervention. this is the heart of the gemm matrix multiply function, which powers deep learning, and every fast implementation i know has come from old-school assembler jockeys hand-tweaking instructions!

when i first started looking at the engineering side of neural networks, i assumed that i’d be following the path i’d taken on the rest of my career and getting most of my performance wins from improving the algorithms, writing clean code, and generally getting out of the way so the compiler could do its job of optimizing it. instead i spend a large amount of my time worrying about instruction dependencies and all the other hardware details that we were supposed to be able to escape in the 21st century.

why is this? matrix multiplies are a hard case for modern compilers to handle. the inputs used for neural networks mean that one function call may require millions of operations to complete, which magnifies the latency impact of any small changes to the code. the access patterns are entirely predictable for a long period, but not purely linear, which doesn’t fit well with cache line algorithms as written in the naive way above. there are lots of choices about how to accumulate intermediate results and reuse memory reads, which will have different outcomes depending on the sizes of the matrices involved.
all this means that an endangered species, hand-coding assembler experts, write all of the best implementations. gotoblas (which evolved into openblas) showed how much speed could be gained on intel cpus. eigen has had a lot of work put into it to run well on both x86 and arm with float, and gemmlowp is optimized for eight-bit on arm. even if you’re running on a gpu, scott gray (formerly at nervana, now at openai) has shown how much faster hand-coded solutions can be.

this is important because it means that there’s a lot of work involved in getting good performance from new platforms, and there’s often a big gap between existing highly-optimized solutions and those ported from other architectures. this is visible for example with gemmlowp on x86, where the hand optimization is still a work in progress and so the speed still lags behind float alternatives right now. it’s also exciting, because the real-world performance of even most optimized libraries lags behind the theoretical limits of the hardware, so there are still opportunities to squeeze more speed out of them with some clever hacking. there are also exciting developments in fundamentally different approaches to the problem like the winograd algorithm. the good news is that if you’re an old-school assembler hacker there’s still an important place for you in the brave new world of deep learning, so i hope we can pull you in!

rewriting tensorflow graphs with the gtt
december 30, 2016 by pete warden in uncategorized 1 comment

photo by stephen d. strowes

one of the most interesting things about neural networks for me is that they’re programs you can do meaningful computation on. the most obvious example of that is automatic differentiation, but even after you’ve trained a model there are lots of other interesting transformations you can apply.
these can be as simple as trimming parts of the graph that aren’t needed for just running inference, all the way to folding batch normalization nodes into precalculated weights, turning constant sub expressions into single nodes, or rewriting calculations in eight bit. many of these operations have been available as piecemeal python scripts inside the tensorflow codebase, but i’ve spent some time rewriting them into what i hope is a much cleaner and easier to extend c++ graph transform tool. as well as a set of predefined operations based on what we commonly need ourselves, i’ve tried to create a simple set of matching operators and other utilities to encourage contributors to create and share their own rewriting passes. i think there’s a lot of potential for computing on compute graphs, so i’m excited to hear what you can come up with! do cc me (@petewarden) on github too with any issues you encounter.

ai and unreliable electronics (*batteries not included)
december 29, 2016 by pete warden in uncategorized 1 comment

picture by torley

a few months ago i returned to my home town of cambridge to attend the first arm research summit. what was special about this conference was that it focused on introducing external researchers to each other, rather than pushing arm’s own agenda. they had invited a broad range of people they worked with, from academic researchers to driver engineers, and all we had in common was that we spent a lot of time working on the arm platform. this turned out fantastically for me at least, because it meant i had the chance to learn from experts in fields i knew nothing about. as such, it left my mind spinning a little, and so this post is a bit unusual! i’m trying to clarify gut feelings about the future with some actual evidence, so please bear with me as i work through my reasoning. one of my favorite talks was on energy harvesting by james myers.
this table leapt out at me (apologies to james if i copied any of his figures incorrectly):

energy harvesting rules of thumb:
human vibration – 4µw/cm2
industrial vibration – 100µw/cm2
human temperature difference – 25µw/cm2
industrial temperature difference – 1 to 10 mw/cm2
indoor light – 10µw/cm2
outdoor light – 10mw/cm2
gsm rf – 0.1µw/cm2
wifi rf – 0.001µw/cm2

what this means in plain english is that you can expect to harvest four micro-watts (millionths of a watt or µw) for every square centimeter of a device relying on human vibration. a solar panel in sunlight could gather ten milliwatts (thousandths of a watt or mw) for every square centimeter. if you think about an old incandescent bulb, that burns forty watts, and even a modern cell phone probably uses a watt or so when it’s being actively used, so the power you can get from energy harvesting is clearly not enough for most applications. my previous post on smartphone energy consumption shows that even running an accelerometer takes over twenty milliwatts, so clearly it’s hard to build devices that rely on these levels of power.

why does that matter? i’m convinced that smart sensors are going to be massively important in the future, and that vision can’t work if they require batteries. i believe that we’ll be throwing tiny cheap devices up in the air like confetti to scatter around all the environments we care about, and they’ll result in a world we can intelligently interact with in unprecedented ways. imagine knowing exactly where pests are in a crop field so that a robot can manually remove them rather than indiscriminately spraying pesticides, or having stickers on every piece of machinery in a factory that listen to the sounds and report when something needs maintenance. these sort of applications will only work if the devices can last for years unattended.
We can already build tiny chips that do these sorts of things, but we can't build batteries that can power them for anywhere near that long, and that's unlikely to change soon. Can the cloud come to our rescue? I'm a software guy, but everything I see in the hardware world shows that transmitting signals continuously takes a lot of energy. Even with a protocol like BLE, sending data just a foot draws more than 10 milliwatts. There seems to be an enduring relationship between power usage and the distance you're sending the data: register access is cheaper than SRAM, which is far cheaper than DRAM, which beats radio transmission. That's why I believe our only hope for long-lived smart sensors is driving down the energy used by local compute to the point at which harvesting gives enough power to run useful applications.

The good news is that existing hardware like DSPs can perform a multiply-add for just low double-digit picojoules, and can access local SRAM to avoid the costs of DRAM. If you do the back-of-the-envelope calculation, a small image network like Inception v1 takes about 1.5 billion multiply-adds, so 20 picojoules * 1.5 billion gives a rough energy cost of 30 millijoules per prediction (or 30 milliwatts at one prediction per second). This is already an order of magnitude less energy than the equivalent work done on a general-purpose CPU, so it's a good proof that it's possible to dramatically reduce computational costs, even though it's still too high for energy harvesting to work.

That's where another recurrent theme of the ARM research conference started to seem very relevant. I hadn't realized how hard it is to keep results reliable as components continue to shrink. Increasingly large parts of the design are devoted to avoiding problems like Rowhammer, where accesses to adjacent DRAM rows can flip bits, as Onur Mutlu explained.
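The same back-of-the-envelope sum, written out (the 20 pJ per multiply-add and 1.5 billion multiply-adds are the rough figures quoted above):

```python
# Rough energy cost per Inception v1 prediction on DSP-class hardware.
PICOJOULES_PER_MADD = 20        # low double-digit picojoules per multiply-add
MADDS_PER_PREDICTION = 1.5e9    # approximate multiply-adds for Inception v1

joules_per_prediction = PICOJOULES_PER_MADD * MADDS_PER_PREDICTION * 1e-12
millijoules = joules_per_prediction * 1e3   # ~30 mJ per prediction
milliwatts_at_1hz = millijoules             # 30 mJ/s is 30 mW at 1 prediction/sec
```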
It's not just memory that faces problems like these; CPUs also need to be over-engineered to avoid errors introduced by current leakage and weirder quantum-level effects. I was actually very excited when I learned this, because one of the great properties of neural networks is that they're very resilient in the face of random noise. If we're going to be leaving an increasing amount of performance on the table to preserve absolute reliability for traditional computing applications, that opens the door for specialized hardware without those guarantees that will be able to offer increasingly better energy consumption. Again, I'm a software engineer, so I don't know exactly what kinds of designs are possible, but based on what I heard at the conference I'm hoping that by relaxing constraints on the hardware, chip creators will be able to come up with order-of-magnitude improvements. If we can drive computational energy costs down into the femtojoules per multiply-add, then the world of ambient sensors will explode. As I was writing, I ran across a new startup that's using deep learning and microphones to predict problems with machinery, but just imagine when those, along with seismic, fire, and all sorts of other sensors, are scattered everywhere, too simple to record data but smart enough to alert people when special conditions occur. I can't wait to see how this process unfolds, but I'm betting unreliable electronics will be a key factor in making it possible.

TensorFlow for Mobile Poets
September 27, 2016 by Pete Warden in Uncategorized, 32 comments

In TensorFlow for Poets, I showed how you could train a neural network to recognize objects using your own custom images. The next step is getting that model into users' hands, so in this tutorial I'll show you what you need to do to run it in your own iOS application.
I'm assuming you've already completed TensorFlow for Poets, and so you should have Docker installed and a tf_files folder in your home directory that contains a retrained_graph.pb file containing your model. If you don't, you'll need to work through that example to build your own network. You'll find the screencast to accompany this tutorial at https://www.youtube.com/watch?v=_bkzppniydo, which should help clarify the steps I'll be walking you through.

As a first step, open the Docker Quickstart Terminal and start a new Docker container using the latest Docker image. This tutorial relies on some newer features of TensorFlow, so the v0.8 image used for the original TF for Poets won't work:

docker run -it -p 8888:8888 -v $HOME/tf_files:/tf_files \
tensorflow/tensorflow:nightly-devel

You should find yourself in a new shell where the prompt begins with root@ and ends with a '#', indicating you're running inside the Docker image. To make sure things are set up correctly, run `ls -lah /tf_files/` and check that the retrained_graph.pb file appears.

Next, we're going to make sure that the model is producing sane results at the start. Here I'm using the default flower images to test, but if you have trained on custom categories, substitute an image file of your own. The compilation process may take a few minutes, so make sure that you have updated the VirtualBox settings to take advantage of your machine's memory and processors if things are running too slowly:

cd /tensorflow/
bazel build tensorflow/examples/label_image:label_image
bazel-bin/tensorflow/examples/label_image/label_image \
--output_layer=final_result \
--labels=/tf_files/retrained_labels.txt \
--image=/tf_files/flower_photos/daisy/5547758_eea9edfd54_n.jpg \
--graph=/tf_files/retrained_graph.pb

This should hopefully produce a sensible top label for your example, in the case of flowers with daisy at the top.
We'll be using this command to make sure we're still getting sensible results as we do further processing on the model file to prepare it for use in a mobile app.

Mobile devices have limited amounts of memory, and apps need to be downloaded, so by default the iOS version of TensorFlow only includes support for operations that are common in inference and don't have large external dependencies. You can see the list of supported ops in the tensorflow/contrib/makefile/tf_op_files.txt file. One of the operations that isn't supported is DecodeJpeg, because the current implementation relies on libjpeg, which is painful to support on iOS and would increase the binary footprint. While we could write a new implementation that uses iOS's native image libraries, for most mobile applications we don't need to decode JPEGs because we're dealing directly with camera image buffers.

Unfortunately the Inception model we based our retraining on includes a DecodeJpeg operation. We normally bypass this by directly feeding the Mul node that occurs after the decode, but on platforms that don't support the operation you'll see an error when the graph is loaded, even if the op is never called. To avoid this, the optimize_for_inference script removes all nodes that aren't needed for a given set of input and output nodes. The script also does a few other optimizations that help speed, such as merging explicit batch normalization ops into the convolutional weights to reduce the number of calculations. Here's how you run it:

bazel build tensorflow/python/tools:optimize_for_inference
bazel-bin/tensorflow/python/tools/optimize_for_inference \
--input=/tf_files/retrained_graph.pb \
--output=/tf_files/optimized_graph.pb \
--input_names=Mul \
--output_names=final_result

This creates a new file at /tf_files/optimized_graph.pb.
To check that it hasn't altered the output of the network, run label_image again on the updated model:

bazel-bin/tensorflow/examples/label_image/label_image \
--output_layer=final_result \
--labels=/tf_files/retrained_labels.txt \
--image=/tf_files/flower_photos/daisy/5547758_eea9edfd54_n.jpg \
--graph=/tf_files/optimized_graph.pb

You should see very similar results to the first time you ran label_image, since the underlying mathematical results should be preserved through the changes made to streamline it.

The retrained model is still 87MB in size at this point, and that guarantees a large download size for any app that includes it. There are lots of ways to reduce download sizes, and I'll cover those in more detail in other documentation, but there's one very simple approach that's a big help without adding much complexity. Because Apple distributes apps in .ipa packages, all of the assets are compressed using zip. Usually models don't compress well because the weights are all slightly different floating-point values. You can achieve much better compression just by rounding all the weights within a particular constant to 256 levels, while still leaving them in floating-point format. This gives a lot more repetition for the compression algorithm to take advantage of, but doesn't require any new operators and only reduces the precision by a small amount (typically less than a 1% drop). Here's how you call the quantize_graph script to apply these changes:

bazel build tensorflow/tools/quantization:quantize_graph
bazel-bin/tensorflow/tools/quantization/quantize_graph \
--input=/tf_files/optimized_graph.pb \
--output=/tf_files/rounded_graph.pb \
--output_node_names=final_result \
--mode=weights_rounded

If you look on disk, the raw size of the rounded_graph.pb file is the same 87MB, but if you right-click on it in the Finder and choose "Compress", you should see it results in a file that's only about 24MB or so.
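To see why rounding helps zip so much, here's a minimal sketch of the idea, not the quantize_graph implementation itself: snap each weight to one of 256 evenly spaced levels between the tensor's min and max, keep it stored as a float, and compare the compressed sizes:

```python
# Illustrative sketch of weight rounding for better compression.
# Rounded weights take on at most 256 distinct values, so the byte
# stream is full of repeats that deflate well.
import random
import struct
import zlib

def round_weights(weights, levels=256):
    """Snap each weight to one of `levels` evenly spaced float values."""
    lo, hi = min(weights), max(weights)
    step = (hi - lo) / (levels - 1)
    return [lo + round((w - lo) / step) * step for w in weights]

random.seed(0)
weights = [random.uniform(-1.0, 1.0) for _ in range(10000)]

raw = struct.pack(f"{len(weights)}f", *weights)
rounded = struct.pack(f"{len(weights)}f", *round_weights(weights))

# Random floats barely compress; the rounded buffer deflates to a
# fraction of its raw size.
ratio = len(zlib.compress(rounded)) / len(zlib.compress(raw))
```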
That reflects the size increase you'd actually see in a compressed .ipa on iOS, or an .apk on Android. To verify that the model is still working, run label_image again:

bazel-bin/tensorflow/examples/label_image/label_image \
--output_layer=final_result \
--labels=/tf_files/retrained_labels.txt \
--image=/tf_files/flower_photos/daisy/5547758_eea9edfd54_n.jpg \
--graph=/tf_files/rounded_graph.pb

This time, I would expect the results to show slightly more noticeable changes thanks to the effects of the quantization, but the overall size and order of the labels should still be the same.

The final processing step we need to run is memory mapping. Because the buffers holding the model weight values are 87MB in size, the memory needed to load these into the app can put a lot of pressure on RAM in iOS even before the model is run. This can lead to stability problems, as the OS can unpredictably kill apps that use too much memory. Fortunately these buffers are read-only, so it's possible to map them into memory in a way that the OS can easily discard behind the scenes when there's memory pressure, avoiding the possibility of those crashes. To support this, we need to rearrange the model so that the weights are held in sections that can be easily loaded separately from the main GraphDef, though they're all still in one file. Here is the command to do that:

bazel build tensorflow/contrib/util:convert_graphdef_memmapped_format
bazel-bin/tensorflow/contrib/util/convert_graphdef_memmapped_format \
--in_graph=/tf_files/rounded_graph.pb \
--out_graph=/tf_files/mmapped_graph.pb

One thing to watch out for is that the file on disk is no longer a plain GraphDef protobuf, so if you try loading it into a program like label_image that expects one, you'll see errors. You need to load the model file slightly differently, which we'll show in the iOS example below.
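As a rough illustration of why read-only mappings help, here's a stdlib mmap sketch on a stand-in file (this is not TensorFlow's memmapped format, just the underlying OS mechanism):

```python
# Map a stand-in "weights" file read-only. Because the pages are clean
# (never written), the OS can drop them under memory pressure and page
# them back in later, instead of killing the process.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "weights.bin")
with open(path, "wb") as f:
    f.write(bytes(range(256)) * 1024)  # 256 KB of fake weight data

with open(path, "rb") as f:
    # ACCESS_READ keeps the mapping read-only.
    weights = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first_byte, size = weights[0], len(weights)
    weights.close()
```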
So far we've been running all these scripts in Docker, since for demonstration purposes it's a lot easier to run scripts there, because installing the Python dependencies is a lot more straightforward on Ubuntu than OS X. Now we're going to switch to a native terminal so that we can compile an iOS app that uses the model you've trained.

You'll need Xcode 7.3 or later with the command line tools installed to build the app, which you can download from Apple. You'll also need brew and automake to run the build script. To install automake using brew, run this command:

brew install automake

Once you have those, open up a new terminal window, download the TensorFlow source (using `git clone https://github.com/tensorflow/tensorflow`) to a folder on your machine (replacing `~/projects/tensorflow` below with that location) and run the following commands to build the framework and copy your model files over:

cd ~/projects/tensorflow
tensorflow/contrib/makefile/build_all_ios.sh
cp ~/tf_files/mmapped_graph.pb \
tensorflow/contrib/ios_examples/camera/data/
cp ~/tf_files/retrained_labels.txt \
tensorflow/contrib/ios_examples/camera/data/
open tensorflow/contrib/ios_examples/camera/camera_example.xcodeproj

Check the terminal to make sure that your compilation succeeded without errors, and then you should find the camera example project opened in Xcode. This app shows a live feed of your camera, together with the labels for any objects it has recognized, so it's a good demo project for testing out a new model.

The terminal commands above should have copied the model files you need into the app's data folder, but you still need to let Xcode know that it should include them in the app. To remove the default model files, go to the left-hand project navigator pane in Xcode, select imagenet_comp_graph_label_strings.txt and tensorflow_inception_graph.pb in the data folder, and delete them, choosing "Move to Trash" when prompted.
Next, open a Finder window containing the new model files, for example from the terminal like this:

open tensorflow/contrib/ios_examples/camera/data

Drag `mmapped_graph.pb` and `retrained_labels.txt` from that Finder window into the data folder in the project navigator. Make sure "Add to Targets" is checked for CameraExample in the dialog. This lets Xcode know to include the files when you build the app, so if you see later errors about missing files, double-check these steps.

We've got the files in the app, but we also need to update some other information: the names of the files to load, but also some other metadata about the size of the input images, the node names, and how to scale the pixel values numerically before feeding them in. To make those changes, open CameraExampleViewController.mm in Xcode, look for the model settings near the top of the file, and replace them with the following block:

// If you have your own model, modify this to the file name, and make sure
// you've added the file to your app resources too.
static NSString* model_file_name = @"mmapped_graph";
static NSString* model_file_type = @"pb";
// This controls whether we'll be loading a plain GraphDef proto, or a
// file created by the convert_graphdef_memmapped_format utility that wraps a
// GraphDef and parameter file that can be mapped into memory from file to
// reduce overall memory usage.
const bool model_uses_memory_mapping = true;
// If you have your own model, point this to the labels file.
static NSString* labels_file_name = @"retrained_labels";
static NSString* labels_file_type = @"txt";
// These dimensions need to match those the model was trained with.
const int wanted_input_width = 299;
const int wanted_input_height = 299;
const int wanted_input_channels = 3;
const float input_mean = 128.0f;
const float input_std = 128.0f;
const std::string input_layer_name = "Mul";
const std::string output_layer_name = "final_result";

Finally, plug in and select your iOS device (this won't run on the simulator because it needs a camera) and hit Command+R to build and run the modified example. If everything has worked, you should see the app start, display the live camera feed, and begin showing labels from your training categories.

To test it out, find an example of the kind of object you're trying to recognize, point the camera at it, and see if it is able to give it the right label. If you don't have any physical objects handy, try doing an image search on the web, and then point the camera at your computer display. Congratulations, you've managed to train your own model and run it on a phone!

As next steps, a lot of the same transformations can be used on Android or for the Raspberry Pi, and for all sorts of other models available in TensorFlow, for everything from natural language processing to speech synthesis. I'm excited to see new apps emerge using the incredible capabilities of deep learning on device, so I can't wait to see what you come up with!

What are GPUs, anyway?
May 17, 2016 by Pete Warden in Uncategorized, 4 comments

Photo by Mark, Vicki, Ellaura, and Mason

A good friend of mine just asked me "What are GPUs?". It came up because she's a great digital artist who's getting into VR, and the general advice she gets is "buy a PC with a video card that costs more than $350". What makes that one component cost so much, why do we need them, and what do they do? To help answer that, I thought I'd try to give an overview aimed at non-engineers.

Graphics processing units were created to draw images, text, and geometry onto the screen. This means they're designed very differently than the CPUs that run applications.
CPUs need to be good at following very complex recipes of instructions so they can deal with all sorts of user inputs and switch between tasks rapidly. GPUs are much more specialized. They only need to do a limited range of things, but each job they're given can involve touching millions of memory locations in one go.

To see the difference between the kinds of programs that run on CPUs and GPUs, think about a CPU reading from a text box. The CPU will sit waiting for you to press a key, and as soon as you do it might need to look in a list to figure out if there's an autocomplete entry, check the spelling, or move to the next box if you hit return. This is a complex set of instructions with a lot of decisions involved. By contrast, a typical GPU task would be drawing an image on screen. A picture that's 1,000 pixels wide and high has a million elements, and drawing it means moving all of those into the screen buffer. That's a lot more work than just waiting for a key press, but it also involves a lot fewer decisions, since you just need to move a large number of pixels from one place to another.

The differences in the kinds of tasks that CPUs and GPUs need to do mean that they're designed in very different ways. CPUs are very flexible and able to do a lot of complicated tasks involving decision-making. GPUs are less adaptable, but can operate on large numbers of elements at once, so they can perform many operations much faster. The way GPUs achieve this is by breaking their tasks into much smaller components that can be shared across a large set of many small processors running at once. Because the jobs they're being asked to do are simpler than those running on CPUs, it's easy to automatically split them up like this. As an example, you can imagine having hundreds of little processors, each of which is given a tile of an image to draw. By having them work in parallel, the whole picture can be drawn much faster.

The key advantage of GPUs is this scalability.
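Here's a toy sketch of that tile-splitting structure; the image size, tile count, and framebuffer layout are my own illustration, and the tiles run in sequence here where a GPU would run them on separate processors at once:

```python
# Fill a 1,000 x 1,000 "image" by handing each worker an independent
# horizontal band of rows. Each band touches disjoint memory, which is
# what makes the job trivially parallel on a GPU.
WIDTH, HEIGHT, TILES = 1000, 1000, 100

framebuffer = [0] * (WIDTH * HEIGHT)

def draw_tile(tile_index):
    """Fill one band of rows; no tile depends on any other tile."""
    rows_per_tile = HEIGHT // TILES
    start_row = tile_index * rows_per_tile
    for row in range(start_row, start_row + rows_per_tile):
        for col in range(WIDTH):
            framebuffer[row * WIDTH + col] = tile_index  # the "pixel" value

# A GPU would dispatch these hundred jobs to many small processors;
# a sequential loop stands in for that here.
for tile in range(TILES):
    draw_tile(tile)
```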
They can't do every job, but for the ones they can tackle, you can essentially just pack more processors onto the board to get faster performance. This is why video cards capable of handling the high resolutions and framerates you need for VR are more expensive: as you go up in price, they have more (and individually faster) processors to handle those larger sizes. This scalability is harder to achieve on CPUs, because it's much trickier to break up the logic needed to run applications into smaller jobs.

This is a painfully simplified explanation, I know, but I'm hoping to get across what makes GPUs fundamentally different from CPUs. If you have a task that involves a lot of computation but few decision points, then GPUs are set up to parallelize that job automatically. This is clearest in graphics, but also comes up as part of deep learning, where there are similar heavy-lifting requirements across millions of artificial neurons. As Moore's Law continues to fade, leaving CPU speeds to languish, these sorts of parallel approaches will become more and more attractive.

Bossy girls, Parser McParseface, and why deep learning is not just another fad
May 15, 2016 by Pete Warden in Uncategorized, 1 comment

When I talk to people outside of Google and the subject turns to neural networks, I often encounter a lot of skepticism. Anybody who's been alive over the past two decades has seen a lot of technological fads appear in an explosion of hype and fade away without making much of a lasting impact. Remember fuzzy logic, CORBA, or the Semantic Web? Deep learning is different, and I believe this fervently because I've seen the approach deliver record-beating results in practical applications across an amazing variety of different problems. That's why TensorFlow is so important to me personally, because it's a great platform to share some very down-to-earth tools that demonstrate convincingly how powerful the technique can be.
That's a big reason I've tried to build approachable tutorials for common needs like image recognition, so everyone has a chance to see it working for themselves. It's also why I was over the moon to see another Google research team release Parsey McParseface! This is a state-of-the-art sentence parser that's built using TensorFlow. That might sound a bit esoteric, but parsing is one of the fundamental problems that computers need to tackle to understand written language. With this available, I'm starting to dream up all sorts of interesting applications I wouldn't have been able to think about before. For instance, I'd love to know what verbs and adjectives are most commonly applied to men and women in all sorts of different contexts. To illustrate my point, here's a paragraph from a great article on why "bossy" is so gendered:

"Finally, the most flexible approach is one that is much more labor intensive. It involves gathering a random sample of instances of bossy and then simply reading through all of them with our own eyes to determine who is being labelled bossy. This is the approach I took in my recent blog post. Because of the amount of time involved, I looked at far fewer examples than any of the approaches I've discussed, but I also was able to classify instances that the above approaches would have missed. The graph below illustrates what I found, namely that bossy was applied to women and girls three times more frequently than it was to men and boys. … You might think to yourself, 'But there's only 101 examples! That's so few!'"

This kind of attribution of an adjective to a subject is something an accurate parser can do automatically. Rather than laboriously going through just a hundred examples, it's easy to set up Parsey McParseface and run it through millions of sentences. The parser isn't perfect, but at 94% accuracy on one metric, it's pretty close to humans, who get 96%.
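As a sketch of how that attribution could work, here's a minimal counter over dependency-parse tuples. The tiny hand-written parses and the "amod" (adjectival modifier) convention stand in for real parser output:

```python
# Count how often each adjective modifies each head noun, given
# dependency parses of the kind a parser like Parsey McParseface emits.
from collections import Counter

# Each token: (word, dependency_label, head_word). These toy parses are
# hand-written stand-ins for real parser output.
parsed_sentences = [
    [("the", "det", "girl"), ("bossy", "amod", "girl"), ("girl", "nsubj", "spoke")],
    [("a", "det", "girl"), ("bossy", "amod", "girl"), ("girl", "nsubj", "left")],
    [("the", "det", "boy"), ("confident", "amod", "boy"), ("boy", "nsubj", "spoke")],
]

def adjective_counts(sentences):
    """Tally (adjective, noun) pairs from 'amod' dependency arcs."""
    counts = Counter()
    for tokens in sentences:
        for word, dep, head in tokens:
            if dep == "amod":
                counts[(word, head)] += 1
    return counts

counts = adjective_counts(parsed_sentences)
```

Run over millions of parsed sentences instead of three, the same tally would surface exactly the kind of skew the article above measured by hand.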
Even better, having the computer do the heavy lifting means that it's possible to explore many other relationships in the data, to uncover all sorts of unknown statistical relationships in the language we use. There are bound to be other words that are skewed in similar or opposite ways to "bossy", and I'd love to know what they are! That's just one example though. The reason I'm so excited about deep learning is that I can't even imagine all the applications that have now become possible. Download Parsey McParseface yourself and give it a try on a problem you care about; I'd love to see what you come up with!
size_t - 0.05% (3)
advantage - 0.05% (3)
recognize - 0.05% (3)
function - 0.05% (3)
dependencies - 0.05% (3)
compiler - 0.05% (3)
making - 0.05% (3)
lots - 0.05% (3)
leaving - 0.05% (3)
increasing - 0.05% (3)
door - 0.05% (3)
sizes - 0.05% (3)
steps - 0.05% (3)
yourself - 0.05% (3)
--image=/tf_files/flower_photos/daisy/5547758_eea9edfd54_n.jpg - 0.05% (3)
gemm - 0.05% (3)
includes - 0.05% (3)
something - 0.05% (3)
--labels=/tf_files/retrained_labels.txt - 0.05% (3)
--output_layer=final_result - 0.05% (3)
default - 0.05% (3)
categories - 0.05% (3)
bazel-bin/tensorflow/examples/label_image/label_image - 0.05% (3)
list - 0.05% (3)
files, - 0.05% (3)
parallel - 0.05% (3)
image. - 0.05% (3)
reason - 0.05% (3)
onto - 0.05% (3)
fundamental - 0.05% (3)
logic - 0.05% (3)
board - 0.05% (3)
might - 0.05% (3)
drag - 0.05% (3)
involves - 0.05% (3)
right-hand - 0.05% (3)
instance - 0.05% (3)
love - 0.05% (3)
user - 0.05% (3)
written - 0.05% (3)
training - 0.05% (3)
spent - 0.05% (3)
designed - 0.05% (3)
fail - 0.05% (3)
recent - 0.05% (3)
poets, - 0.05% (3)
posts - 0.05% (3)
poe… - 0.05% (3)
text - 0.05% (3)
control - 0.05% (3)
demo - 0.05% (3)
below - 0.05% (3)
setting - 0.05% (3)
current - 0.05% (3)
tensorflow/contrib/ios_examples/camera/data - 0.05% (3)
brew - 0.05% (3)
apply - 0.05% (3)
fortunately - 0.05% (3)
sets - 0.05% (3)
loaded - 0.05% (3)
program - 0.05% (3)
pixel - 0.05% (3)
in. - 0.05% (3)
card - 0.05% (3)
component - 0.05% (3)
complex - 0.05% (3)
images, - 0.05% (3)
general - 0.05% (3)
language - 0.05% (3)
convert_graphdef_memmapped_format - 0.05% (3)
out, - 0.05% (3)
after - 0.05% (3)
computer - 0.05% (3)
before - 0.05% (3)
optimize_for_inference - 0.05% (3)
less - 0.05% (3)
takes - 0.05% (3)
shows - 0.05% (3)
being - 0.05% (3)
order - 0.05% (3)
share - 0.05% (3)
calculations - 0.05% (3)
expect - 0.05% (3)
work. - 0.05% (3)
tried - 0.05% (3)
gives - 0.05% (3)
scripts - 0.05% (3)
ported - 0.05% (3)
multiply-add - 0.05% (3)
with! - 0.05% (3)
interesting - 0.05% (3)
graphs - 0.05% (3)
near - 0.05% (3)
enough - 0.05% (3)
everything - 0.05% (3)
inception - 0.05% (3)
applications. - 0.05% (3)
vibration - 0.05% (3)
already - 0.05% (3)
available - 0.05% (3)
warden's - 0.05% (3)
behind - 0.05% (3)
hear - 0.05% (3)
big - 0.05% (3)
place - 0.05% (3)
people - 0.05% (3)
electronics - 0.05% (3)
relationship - 0.05% (3)
far - 0.05% (3)
based - 0.05% (3)
solutions - 0.05% (3)
care - 0.05% (3)
be. - 0.05% (3)
believe - 0.05% (3)
working - 0.05% (3)
foot - 0.05% (3)
unreliable - 0.05% (3)
we’ll - 0.05% (3)
cheap - 0.05% (3)
transform - 0.05% (3)
actual - 0.05% (3)
conference - 0.05% (3)
out. - 0.03% (2)
decisions - 0.03% (2)
gather - 0.03% (2)
clearly - 0.03% (2)
graphics - 0.03% (2)
typical - 0.03% (2)
differently - 0.03% (2)
watts, - 0.03% (2)
box. - 0.03% (2)
makes - 0.03% (2)
easier - 0.03% (2)
post. - 0.03% (2)
sit - 0.03% (2)
deal - 0.03% (2)
figure - 0.03% (2)
screen. - 0.03% (2)
reading - 0.03% (2)
soon - 0.03% (2)
waiting - 0.03% (2)
longer - 0.03% (2)
asked - 0.03% (2)
batch - 0.03% (2)
created - 0.03% (2)
bad - 0.03% (2)
normalization - 0.03% (2)
match - 0.03% (2)
299; - 0.03% (2)
levels - 0.03% (2)
aren’t - 0.03% (2)
model, - 0.03% (2)
cameraexample - 0.03% (2)
them, - 0.03% (2)
xcode, - 0.03% (2)
got - 0.03% (2)
called - 0.03% (2)
parts - 0.03% (2)
(which - 0.03% (2)
128.0f; - 0.03% (2)
std::string - 0.03% (2)
drawing - 0.03% (2)
mine - 0.03% (2)
gpus, - 0.03% (2)
delete - 0.03% (2)
who’s - 0.03% (2)
gets - 0.03% (2)
tell - 0.03% (2)
consumption - 0.03% (2)
android - 0.03% (2)
example. - 0.03% (2)
image, - 0.03% (2)
finally, - 0.03% (2)
display - 0.03% (2)
begin - 0.03% (2)
constant - 0.03% (2)
moves - 0.03% (2)
video - 0.03% (2)
square - 0.03% (2)
times - 0.03% (2)
girls - 0.03% (2)
again. - 0.03% (2)
adjective - 0.03% (2)
hundred - 0.03% (2)
relationships - 0.03% (2)
lifting - 0.03% (2)
heavy - 0.03% (2)
github - 0.03% (2)
approach. - 0.03% (2)
applied - 0.03% (2)
james - 0.03% (2)
tackle - 0.03% (2)
copy - 0.03% (2)
women - 0.03% (2)
instances - 0.03% (2)
labor - 0.03% (2)
illustrate - 0.03% (2)
@petewarden - 0.03% (2)
amit - 0.03% (2)
worked - 0.03% (2)
attend - 0.03% (2)
turned - 0.03% (2)
architecture - 0.03% (2)
latest - 0.03% (2)
wordpress.com. - 0.03% (2)
researchers - 0.03% (2)
homeabout - 0.03% (2)
chance - 0.03% (2)
included) - 0.03% (2)
ended - 0.03% (2)
future - 0.03% (2)
bhaduri - 0.03% (2)
flowers - 0.03% (2)
clarify - 0.03% (2)
experts - 0.03% (2)
(*batteries - 0.03% (2)
copied - 0.03% (2)
sound - 0.03% (2)
little - 0.03% (2)
pane - 0.03% (2)
easy - 0.03% (2)
ones - 0.03% (2)
pack - 0.03% (2)
scalability - 0.03% (2)
larger - 0.03% (2)
handle - 0.03% (2)
jobs - 0.03% (2)
smaller - 0.03% (2)
centimeter - 0.03% (2)
mouse - 0.03% (2)
commonly - 0.03% (2)
fewer - 0.03% (2)
flexible - 0.03% (2)
break - 0.03% (2)
faster. - 0.03% (2)
elements - 0.03% (2)
operators - 0.03% (2)
automatically. - 0.03% (2)
seen - 0.03% (2)
temperature - 0.03% (2)
encounter - 0.03% (2)
appear - 0.03% (2)
fade - 0.03% (2)
sentence - 0.03% (2)
industrial - 0.03% (2)
amazing - 0.03% (2)
subject - 0.03% (2)
google - 0.03% (2)
become - 0.03% (2)
their - 0.03% (2)
comes - 0.03% (2)
mw/cm2 - 0.03% (2)
computing - 0.03% (2)
talk - 0.03% (2)
choose - 0.03% (2)
pixels - 0.03% (2)
commands - 0.03% (2)
billion - 0.03% (2)
old-school - 0.03% (2)
picojoules - 0.03% (2)
compilation - 0.03% (2)
updated - 0.03% (2)
libraries - 0.03% (2)
1.5 - 0.03% (2)
settings - 0.03% (2)
next, - 0.03% (2)
started - 0.03% (2)
optimization - 0.03% (2)
prediction - 0.03% (2)
model. - 0.03% (2)
clean - 0.03% (2)
relies - 0.03% (2)
prompt - 0.03% (2)
lags - 0.03% (2)
won’t - 0.03% (2)
matrix - 0.03% (2)
exciting - 0.03% (2)
ops - 0.03% (2)
supported - 0.03% (2)
c_index - 0.03% (2)
local - 0.03% (2)
isn’t - 0.03% (2)
increase - 0.03% (2)
b_value - 0.03% (2)
painful - 0.03% (2)
news - 0.03% (2)
version - 0.03% (2)
sensible - 0.03% (2)
sram - 0.03% (2)
fundamentally - 0.03% (2)
case - 0.03% (2)
compilers - 0.03% (2)
turns - 0.03% (2)
limited - 0.03% (2)
app. - 0.03% (2)
containing - 0.03% (2)
magnitude - 0.03% (2)
involved. - 0.03% (2)
write - 0.03% (2)
guarantees - 0.03% (2)
kinds - 0.03% (2)
hoping - 0.03% (2)
often - 0.03% (2)
drive - 0.03% (2)
chip - 0.03% (2)
specialized - 0.03% (2)
avoiding - 0.03% (2)
actually - 0.03% (2)
faces - 0.03% (2)
x86 - 0.03% (2)
cpus. - 0.03% (2)
intel - 0.03% (2)
preserve - 0.03% (2)
showed - 0.03% (2)
random - 0.03% (2)
choices - 0.03% (2)
increasingly - 0.03% (2)
existing - 0.03% (2)
impact - 0.03% (2)
code. - 0.03% (2)
computational - 0.03% (2)
inputs - 0.03% (2)
installed - 0.03% (2)
were - 0.03% (2)
modern - 0.03% (2)
seem - 0.03% (2)
doesn’t - 0.03% (2)
record - 0.03% (2)
algorithms - 0.03% (2)
those, - 0.03% (2)
continue - 0.03% (2)
components - 0.03% (2)
possible. - 0.03% (2)
factor - 0.03% (2)
while - 0.03% (2)
b_index - 0.03% (2)
efficient - 0.03% (2)
main - 0.03% (2)
field - 0.03% (2)
watch - 0.03% (2)
disk - 0.03% (2)
we’ve - 0.03% (2)
crop - 0.03% (2)
single - 0.03% (2)
easily - 0.03% (2)
piece - 0.03% (2)
window. - 0.03% (2)
things, - 0.03% (2)
fit - 0.03% (2)
last - 0.03% (2)
pressure - 0.03% (2)
machinery - 0.03% (2)
programs - 0.03% (2)
images. - 0.03% (2)
switch - 0.03% (2)
particular - 0.03% (2)
about, - 0.03% (2)
tensorflow/contrib/ios_examples/camera/data/ - 0.03% (2)
~/projects/tensorflow - 0.03% (2)
shortcut - 0.03% (2)
keyboard - 0.03% (2)
tiny - 0.03% (2)
folder, - 0.03% (2)
scatter - 0.03% (2)
gemmlowp - 0.03% (2)
transformations - 0.03% (2)
unfortunately - 0.03% (2)
later - 0.03% (2)
exactly - 0.03% (2)
brew, - 0.03% (2)
automake - 0.03% (2)
go. - 0.03% (2)
source - 0.03% (2)
ways. - 0.03% (2)
overall - 0.03% (2)
effects - 0.03% (2)
data, - 0.03% (2)
favorite - 0.03% (2)
external - 0.03% (2)
too, - 0.03% (2)
useful - 0.03% (2)
usage - 0.03% (2)
found, - 0.03% (2)
such - 0.03% (2)
loaded, - 0.03% (2)
hackers - 0.03% (2)
decodejpeg - 0.03% (2)
a_value - 0.03% (2)
directly - 0.03% (2)
feeding - 0.03% (2)
((i - 0.03% (2)
cheaper - 0.03% (2)
platforms - 0.03% (2)
a_index - 0.03% (2)
beats - 0.03% (2)
click - 0.03% (2)
usually - 0.03% (2)
models - 0.03% (2)
compressed - 0.03% (2)
floating - 0.03% (2)
achieve - 0.03% (2)
gtt - 0.03% (2)
precision - 0.03% (2)
compression - 0.03% (2)
.ipa - 0.03% (2)
apple - 0.03% (2)
point, - 0.03% (2)
sending - 0.03% (2)
entire - 0.03% (2)
bar - 0.03% (2)
scroll - 0.03% (2)
detail - 0.03% (2)
cover - 0.03% (2)
navigator - 0.03% (2)
of the - 0.58% (35)
in the - 0.45% (27)
at the - 0.38% (23)
need to - 0.31% (19)
to the - 0.3% (18)
that the - 0.27% (16)
lot of - 0.27% (16)
if you - 0.25% (15)
and the - 0.2% (12)
the model - 0.18% (11)
on the - 0.18% (11)
pete warden - 0.18% (11)
deep learning - 0.18% (11)
the app - 0.18% (11)
you can - 0.17% (10)
this is - 0.17% (10)
set of - 0.15% (9)
is that - 0.15% (9)
you should - 0.15% (9)
tensorflow for - 0.15% (9)
can be - 0.13% (8)
to see - 0.13% (8)
able to - 0.13% (8)
make sure - 0.13% (8)
the file - 0.13% (8)
warden in - 0.12% (7)
your own - 0.12% (7)
by pete - 0.12% (7)
in uncategorized - 0.12% (7)
to run - 0.12% (7)
so the - 0.12% (7)
it’s a - 0.12% (7)
all the - 0.12% (7)
for the - 0.12% (7)
there are - 0.12% (7)
that it - 0.1% (6)
to build - 0.1% (6)
and then - 0.1% (6)
up with - 0.1% (6)
all sorts - 0.1% (6)
of images - 0.1% (6)
into the - 0.1% (6)
model file - 0.1% (6)
that are - 0.1% (6)
you have - 0.1% (6)
using the - 0.1% (6)
with a - 0.1% (6)
we can - 0.1% (6)
one of - 0.1% (6)
a folder - 0.1% (6)
sorts of - 0.1% (6)
2016 by - 0.08% (5)
because it - 0.08% (5)
with the - 0.08% (5)
how to - 0.08% (5)
to label - 0.08% (5)
neural networks - 0.08% (5)
the finder - 0.08% (5)
that they - 0.08% (5)
a large - 0.08% (5)
the right - 0.08% (5)
sure that - 0.08% (5)
to make - 0.08% (5)
have a - 0.08% (5)
the image - 0.08% (5)
finder window - 0.08% (5)
which is - 0.08% (5)
come up - 0.08% (5)
them in - 0.08% (5)
for mobile - 0.08% (5)
because the - 0.08% (5)
you need - 0.07% (4)
take a - 0.07% (4)
just a - 0.07% (4)
model is - 0.07% (4)
energy harvesting - 0.07% (4)
that can - 0.07% (4)
means that - 0.07% (4)
bazel build - 0.07% (4)
the op - 0.07% (4)
the top - 0.07% (4)
millions of - 0.07% (4)
we need - 0.07% (4)
static nsstring* - 0.07% (4)
do the - 0.07% (4)
the files - 0.07% (4)
model files - 0.07% (4)
how you - 0.07% (4)
going to - 0.07% (4)
when i - 0.07% (4)
rather than - 0.07% (4)
what you - 0.07% (4)
and run - 0.07% (4)
they can - 0.07% (4)
uncategorized 1 - 0.07% (4)
the data - 0.07% (4)
const float - 0.07% (4)
to avoid - 0.07% (4)
that we - 0.07% (4)
from a - 0.07% (4)
what i - 0.07% (4)
trying to - 0.07% (4)
lot more - 0.07% (4)
deep learning, - 0.07% (4)
through the - 0.07% (4)
from the - 0.07% (4)
for an - 0.07% (4)
the most - 0.07% (4)
is the - 0.07% (4)
1 comment - 0.07% (4)
should see - 0.07% (4)
here’s a - 0.07% (4)
i’ve found - 0.07% (4)
large set - 0.05% (3)
needs a - 0.05% (3)
the next - 0.05% (3)
to remove - 0.05% (3)
unreliable electronics - 0.05% (3)
in your - 0.05% (3)
find the - 0.05% (3)
const int - 0.05% (3)
where the - 0.05% (3)
using this - 0.05% (3)
have been - 0.05% (3)
run it - 0.05% (3)
the hardware - 0.05% (3)
warden's blog - 0.05% (3)
it’s possible - 0.05% (3)
these sort - 0.05% (3)
sort of - 0.05% (3)
up with! - 0.05% (3)
so that - 0.05% (3)
photo by - 0.05% (3)
think about - 0.05% (3)
to open - 0.05% (3)
i believe - 0.05% (3)
i’m a - 0.05% (3)
through a - 0.05% (3)
i want - 0.05% (3)
amount of - 0.05% (3)
the kind - 0.05% (3)
mobile poe… - 0.05% (3)
a good - 0.05% (3)
i can’t - 0.05% (3)
the way - 0.05% (3)
so it’s - 0.05% (3)
but i’m - 0.05% (3)
of your - 0.05% (3)
that’s a - 0.05% (3)
pete warden's - 0.05% (3)
the weights - 0.05% (3)
and that - 0.05% (3)
is still - 0.05% (3)
--output_layer=final_result \ - 0.05% (3)
--labels=/tf_files/retrained_labels.txt \ - 0.05% (3)
--image=/tf_files/flower_photos/daisy/5547758_eea9edfd54_n.jpg \ - 0.05% (3)
the same - 0.05% (3)
but the - 0.05% (3)
build the - 0.05% (3)
load the - 0.05% (3)
we’re going - 0.05% (3)
the command - 0.05% (3)
check the - 0.05% (3)
results in - 0.05% (3)
to load - 0.05% (3)
bazel-bin/tensorflow/examples/label_image/label_image \ - 0.05% (3)
folder in - 0.05% (3)
\ --output_layer=final_result - 0.05% (3)
\ --labels=/tf_files/retrained_labels.txt - 0.05% (3)
\ --image=/tf_files/flower_photos/daisy/5547758_eea9edfd54_n.jpg - 0.05% (3)
the first - 0.05% (3)
up the - 0.05% (3)
like the - 0.05% (3)
label images - 0.05% (3)
the right-hand - 0.05% (3)
on tensorflow - 0.05% (3)
file to - 0.05% (3)
also need - 0.05% (3)
ways to - 0.05% (3)
of time - 0.05% (3)
of different - 0.05% (3)
them with - 0.05% (3)
for poets, - 0.05% (3)
in tensorflow - 0.05% (3)
the labels - 0.05% (3)
open the - 0.05% (3)
work in - 0.05% (3)
to quickly - 0.05% (3)
as you - 0.05% (3)
to get - 0.05% (3)
there’s a - 0.05% (3)
and do - 0.05% (3)
but if - 0.05% (3)
learning is - 0.05% (3)
at this - 0.05% (3)
kind of - 0.05% (3)
inside the - 0.05% (3)
as the - 0.05% (3)
possible to - 0.05% (3)
number of - 0.05% (3)
that is - 0.05% (3)
are lots - 0.05% (3)
i hope - 0.05% (3)
can do - 0.05% (3)
that they’re - 0.05% (3)
of other - 0.05% (3)
the preview - 0.05% (3)
of deep - 0.05% (3)
approach i - 0.05% (3)
i don’t - 0.05% (3)
advantage of - 0.05% (3)
the graph - 0.05% (3)
and so - 0.05% (3)
the app, - 0.05% (3)
a watt - 0.05% (3)
be able - 0.05% (3)
i’d love - 0.05% (3)
const size_t - 0.05% (3)
a great - 0.05% (3)
that you - 0.05% (3)
gpus are - 0.05% (3)
the approach - 0.05% (3)
why deep - 0.05% (3)
this means - 0.05% (3)
any of - 0.05% (3)
lots of - 0.05% (3)
about the - 0.05% (3)
over the - 0.05% (3)
for every - 0.05% (3)
how much - 0.05% (3)
much faster - 0.05% (3)
have your - 0.03% (2)
but we - 0.03% (2)
blog at - 0.03% (2)
own model, - 0.03% (2)
when you - 0.03% (2)
to update - 0.03% (2)
run this - 0.03% (2)
at wordpress.com. - 0.03% (2)
your machine - 0.03% (2)
applied to - 0.03% (2)
the size - 0.03% (2)
but also - 0.03% (2)
some other - 0.03% (2)
folder on - 0.03% (2)
the key - 0.03% (2)
the following - 0.03% (2)
is one - 0.03% (2)
\ tensorflow/contrib/ios_examples/camera/data/ - 0.03% (2)
set up - 0.03% (2)
the default - 0.03% (2)
should include - 0.03% (2)
xcode know - 0.03% (2)
this app - 0.03% (2)
it has - 0.03% (2)
data folder, - 0.03% (2)
tried to - 0.03% (2)
model files, - 0.03% (2)
project navigator - 0.03% (2)
run applications - 0.03% (2)
needed to - 0.03% (2)
you go - 0.03% (2)
to know - 0.03% (2)
i’m hoping - 0.03% (2)
for example - 0.03% (2)
involves a - 0.03% (2)
on how - 0.03% (2)
to men - 0.03% (2)
to handle - 0.03% (2)
warden on - 0.03% (2)
you don’t - 0.03% (2)
give it - 0.03% (2)
of memory - 0.03% (2)
rewriting tensorflow - 0.03% (2)
used on - 0.03% (2)
the difference - 0.03% (2)
that have - 0.03% (2)
cpus and - 0.03% (2)
about a - 0.03% (2)
the gtt - 0.03% (2)
you’re trying - 0.03% (2)
graphs with - 0.03% (2)
range of - 0.03% (2)
wait to - 0.03% (2)
why do - 0.03% (2)
what makes - 0.03% (2)
to give - 0.03% (2)
to draw - 0.03% (2)
onto the - 0.03% (2)
what are - 0.03% (2)
much more - 0.03% (2)
you come - 0.03% (2)
see what - 0.03% (2)
assembler hackers - 0.03% (2)
only need - 0.03% (2)
learning needs - 0.03% (2)
to test - 0.03% (2)
the cpu - 0.03% (2)
this to - 0.03% (2)
it’s easy - 0.03% (2)
faster. the - 0.03% (2)
= 299; - 0.03% (2)
they’re designed - 0.03% (2)
that run - 0.03% (2)
an example - 0.03% (2)
that it’s - 0.03% (2)
to your - 0.03% (2)
file that - 0.03% (2)
much faster. - 0.03% (2)
amit bhaduri - 0.03% (2)
kinds of - 0.03% (2)
299; const - 0.03% (2)
waiting for - 0.03% (2)
= 128.0f; - 0.03% (2)
ai and - 0.03% (2)
const std::string - 0.03% (2)
run on - 0.03% (2)
move to - 0.03% (2)
(*batteries not - 0.03% (2)
bhaduri on - 0.03% (2)
of those - 0.03% (2)
128.0f; const - 0.03% (2)
relationships in - 0.03% (2)
of instructions - 0.03% (2)
having the - 0.03% (2)
next step - 0.03% (2)
learning, so - 0.03% (2)
you in - 0.03% (2)
news is - 0.03% (2)
tensorflow graphs - 0.03% (2)
example of - 0.03% (2)
needed for - 0.03% (2)
that aren’t - 0.03% (2)
parts of - 0.03% (2)
the good - 0.03% (2)
fundamentally different - 0.03% (2)
lags behind - 0.03% (2)
still a - 0.03% (2)
it means - 0.03% (2)
it’s also - 0.03% (2)
of them - 0.03% (2)
also exciting - 0.03% (2)
with some - 0.03% (2)
batch normalization - 0.03% (2)
them into - 0.03% (2)
spent a - 0.03% (2)
had in - 0.03% (2)
researchers to - 0.03% (2)
for me - 0.03% (2)
chance to - 0.03% (2)
human vibration - 0.03% (2)
work through - 0.03% (2)
arm research - 0.03% (2)
not included) - 0.03% (2)
i’ve tried - 0.03% (2)
on what - 0.03% (2)
easier to - 0.03% (2)
to create - 0.03% (2)
excited to - 0.03% (2)
electronics (*batteries - 0.03% (2)
and unreliable - 0.03% (2)
can be. - 0.03% (2)
if you’re - 0.03% (2)
the up - 0.03% (2)
i’ll use - 0.03% (2)
images that - 0.03% (2)
to move - 0.03% (2)
press the - 0.03% (2)
to apply - 0.03% (2)
to use - 0.03% (2)
than just - 0.03% (2)
the start - 0.03% (2)
preview size - 0.03% (2)
so ever - 0.03% (2)
2017 by - 0.03% (2)
images quickly - 0.03% (2)
i’ve spent - 0.03% (2)
finder window, - 0.03% (2)
you see - 0.03% (2)
should now - 0.03% (2)
used to - 0.03% (2)
use to - 0.03% (2)
code, and - 0.03% (2)
of neural - 0.03% (2)
old-school assembler - 0.03% (2)
out of - 0.03% (2)
compilers to - 0.03% (2)
of work - 0.03% (2)
that one - 0.03% (2)
comments photo - 0.03% (2)
uncategorized 3 - 0.03% (2)
are that - 0.03% (2)
to select - 0.03% (2)
may have - 0.03% (2)
they are - 0.03% (2)
that i’ve - 0.03% (2)
needs assembler - 0.03% (2)
run through - 0.03% (2)
difference – - 0.03% (2)
temperature difference - 0.03% (2)
even if - 0.03% (2)
see an - 0.03% (2)
feeding the - 0.03% (2)
to reduce - 0.03% (2)
here’s how - 0.03% (2)
87mb in - 0.03% (2)
it. the - 0.03% (2)
label_image again - 0.03% (2)
model we - 0.03% (2)
don’t need - 0.03% (2)
ops in - 0.03% (2)
don’t have - 0.03% (2)
the ios - 0.03% (2)
operations that - 0.03% (2)
relies on - 0.03% (2)
on ios - 0.03% (2)
to support - 0.03% (2)
of ways - 0.03% (2)
in more - 0.03% (2)
a plain - 0.03% (2)
no longer - 0.03% (2)
here is - 0.03% (2)
into a - 0.03% (2)
you’ll see - 0.03% (2)
that uses - 0.03% (2)
ios app - 0.03% (2)
os can - 0.03% (2)
this can - 0.03% (2)
gives a - 0.03% (2)
weights are - 0.03% (2)
but there’s - 0.03% (2)
operators and - 0.03% (2)
than a - 0.03% (2)
still be - 0.03% (2)
size of - 0.03% (2)
this command - 0.03% (2)
this should - 0.03% (2)
the data, - 0.03% (2)
things, but - 0.03% (2)
work if - 0.03% (2)
that’s why - 0.03% (2)
good news - 0.03% (2)
1.5 billion - 0.03% (2)
for just - 0.03% (2)
can perform - 0.03% (2)
care about, - 0.03% (2)
smart sensors - 0.03% (2)
square centimeter - 0.03% (2)
rf – - 0.03% (2)
light – - 0.03% (2)
an old - 0.03% (2)
for most - 0.03% (2)
shows that - 0.03% (2)
energy consumption - 0.03% (2)
energy cost - 0.03% (2)
order of - 0.03% (2)
used for - 0.03% (2)
the docker - 0.03% (2)
should have - 0.03% (2)
you’re running - 0.03% (2)
docker image. - 0.03% (2)
take advantage - 0.03% (2)
and make - 0.03% (2)
poets, and - 0.03% (2)
mobile poets - 0.03% (2)
a software - 0.03% (2)
not just - 0.03% (2)
the arm - 0.03% (2)
to come - 0.03% (2)
based on - 0.03% (2)
can’t wait - 0.03% (2)
sensors are - 0.03% (2)
you’ll need - 0.03% (2)
pete warden in - 0.12% (7)
by pete warden - 0.12% (7)
all sorts of - 0.1% (6)
to make sure - 0.07% (4)
if you have - 0.07% (4)
make sure that - 0.07% (4)
a lot more - 0.07% (4)
you should see - 0.07% (4)
one of the - 0.07% (4)
uncategorized 1 comment - 0.07% (4)
in uncategorized 1 - 0.07% (4)
how to label - 0.05% (3)
\ --output_layer=final_result \ - 0.05% (3)
tensorflow for poets, - 0.05% (3)
--labels=/tf_files/retrained_labels.txt \ --image=/tf_files/flower_photos/daisy/5547758_eea9edfd54_n.jpg - 0.05% (3)
sure that the - 0.05% (3)
--output_layer=final_result \ --labels=/tf_files/retrained_labels.txt - 0.05% (3)
that can be - 0.05% (3)
to label images - 0.05% (3)
come up with! - 0.05% (3)
for mobile poe… - 0.05% (3)
on tensorflow for - 0.05% (3)
pete warden's blog - 0.05% (3)
i’d love to - 0.05% (3)
deep learning is - 0.05% (3)
we need to - 0.05% (3)
a watt or - 0.05% (3)
of deep learning - 0.05% (3)
why deep learning - 0.05% (3)
it’s possible to - 0.05% (3)
\ --image=/tf_files/flower_photos/daisy/5547758_eea9edfd54_n.jpg \ - 0.05% (3)
in the finder - 0.05% (3)
be able to - 0.05% (3)
are lots of - 0.05% (3)
in tensorflow for - 0.05% (3)
there are lots - 0.05% (3)
sorts of other - 0.03% (2)
the kind of - 0.03% (2)
you come up - 0.03% (2)
to see what - 0.03% (2)
if you don’t - 0.03% (2)
= 128.0f; const - 0.03% (2)
deep learning needs - 0.03% (2)
a plain graphdef - 0.03% (2)
and make sure - 0.03% (2)
you have your - 0.03% (2)
this to the - 0.03% (2)
which used to - 0.03% (2)
299; const int - 0.03% (2)
= 299; const - 0.03% (2)
should see the - 0.03% (2)
but i’m hoping - 0.03% (2)
graphs with the - 0.03% (2)
learning needs assembler - 0.03% (2)
you should now - 0.03% (2)
unreliable electronics (*batteries - 0.03% (2)
bhaduri on tensorflow - 0.03% (2)
blog at wordpress.com. - 0.03% (2)
you’re trying to - 0.03% (2)
lot of time - 0.03% (2)
of images that - 0.03% (2)
see what you - 0.03% (2)
large set of - 0.03% (2)
much of a - 0.03% (2)
used to be - 0.03% (2)
love to know - 0.03% (2)
any of the - 0.03% (2)
move through the - 0.03% (2)
relationships in the - 0.03% (2)
your own model, - 0.03% (2)
build the app, - 0.03% (2)
for mobile poets - 0.03% (2)
i can’t wait - 0.03% (2)
temperature difference – - 0.03% (2)
(*batteries not included) - 0.03% (2)
using the latest - 0.03% (2)
i’ve tried to - 0.03% (2)
and unreliable electronics - 0.03% (2)
you should find - 0.03% (2)
based on what - 0.03% (2)
to come up - 0.03% (2)
that it’s possible - 0.03% (2)
the good news - 0.03% (2)
of things, but - 0.03% (2)
parts of the - 0.03% (2)
of a watt - 0.03% (2)
will be able - 0.03% (2)
going to be - 0.03% (2)
that aren’t needed - 0.03% (2)
sure that you - 0.03% (2)
it should include - 0.03% (2)
xcode know that - 0.03% (2)
a folder on - 0.03% (2)
know that it - 0.03% (2)
should include the - 0.03% (2)
so if you - 0.03% (2)
every square centimeter - 0.03% (2)
to build the - 0.03% (2)
to load the - 0.03% (2)
of deep learning, - 0.03% (2)
tensorflow graphs with - 0.03% (2)
take advantage of - 0.03% (2)
good news is - 0.03% (2)
the weights are - 0.03% (2)
this is the - 0.03% (2)
model is still - 0.03% (2)
also need to - 0.03% (2)

Here you can find a chart of all your popular one-, two-, and three-word phrases. Google and other search engines treat the words and phrases you use most frequently as signals of what your page is about.
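The phrase tables above can be reproduced with a simple n-gram count. Below is a minimal sketch, assuming (as the report's rows like "word - 0.12% (7)" suggest) that each percentage is the phrase's count divided by the total number of words on the page; the function name and tokenization rule are illustrative, not part of the hupso.pl tool.

```python
from collections import Counter
import re

def phrase_frequencies(text, n):
    """Count n-word phrase occurrences and their share of all words.

    Returns (phrase, percentage, count) tuples sorted by count,
    mirroring the report's "phrase - 0.12% (7)" rows.
    """
    # Lowercase and split into word tokens (letters, digits, apostrophes, hyphens).
    words = re.findall(r"[a-z0-9'’\-]+", text.lower())
    total = len(words)
    # Build overlapping n-word phrases across the whole text.
    phrases = [" ".join(words[i:i + n]) for i in range(total - n + 1)]
    counts = Counter(phrases)
    return [(p, round(100 * c / total, 2), c) for p, c in counts.most_common()]

sample = "fail again fail better fail again"
for phrase, pct, count in phrase_frequencies(sample, 2):
    print(f"{phrase} - {pct}% ({count})")
```

Running the same function with n set to 1, 2, and 3 over the page text would regenerate the three tables above.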

Copyright © 2015-2016 hupso.pl. All rights reserved.

Hupso.pl is a web service that lets you quickly and easily check a website for SEO with a single click. We offer free search engine optimization for websites as well as valuation of domains and websites. We maintain a ranking of Polish websites and an Alexa-based site ranking.