Please use this identifier to cite or link to this item: http://localhost/handle/Hannan/1882
Full metadata record
DC Field | Value | Language
dc.contributor.author | Cook, Joshua. | en_US
dc.date.accessioned | 2013 | en_US
dc.date.accessioned | 2020-05-17T08:33:48Z | -
dc.date.available | 2020-05-17T08:33:48Z | -
dc.date.issued | 2017 | en_US
dc.identifier.isbn | 9781484230121 | en_US
dc.identifier.isbn | 9781484230114 (print) | en_US
dc.identifier.uri | http://localhost/handle/Hannan/1882 | -
dc.description | QA76 | en_US
dc.description | SpringerLink (Online service) | en_US
dc.description | Printed edition: 9781484230114. | en_US
dc.description.abstract | Learn Docker "infrastructure as code" technology to define a system for performing standard but non-trivial data tasks on medium- to large-scale data sets, using Jupyter as the master controller. It is not uncommon for a real-world data set to fail to be easily managed. The set may not fit well into available memory or may require prohibitively long processing. These are significant challenges even for skilled software engineers, and they can render the standard Jupyter system unusable. As a solution to this problem, Docker for Data Science proposes using Docker. You will learn how to use existing pre-compiled public images created by the major open-source technologies (Python, Jupyter, Postgres), as well as how to use the Dockerfile to extend these images to suit your specific purposes. The Docker Compose technology is examined, and you will learn how it can be used to build a linked system with Python churning data behind the scenes and Jupyter managing these background tasks. Best practices in using existing images are explored, as well as developing your own images to deploy state-of-the-art machine learning and optimization algorithms. What You'll Learn: master interactive development using the Jupyter platform; run and build Docker containers from scratch and from publicly available open-source images; write infrastructure as code using the docker-compose tool and its docker-compose.yml file type; and deploy a multi-service data science application across a cloud-based system. (A minimal docker-compose.yml sketch of this pattern follows the file listing below.) | en_US
dc.description.statementofresponsibility | by Joshua Cook. | en_US
dc.description.tableofcontents | Chapter 1: Introduction -- Chapter 2: Docker -- Chapter 3: Interactive Programming -- Chapter 4: Docker Engine -- Chapter 5: The Dockerfile -- Chapter 6: Docker Hub -- Chapter 7: The Opinionated Jupyter Stacks -- Chapter 8: The Data Stores -- Chapter 9: Docker Compose -- Chapter 10: Interactive Development. | en_US
dc.format.extent | XXI, 257 p. 97 illus., 76 illus. in color. ; online resource. | en_US
dc.publisher | Apress : | en_US
dc.publisher | Imprint: Apress, | en_US
dc.relation.haspart | 9781484230121.pdf | en_US
dc.subject | Computer Science | en_US
dc.subject | Computers | en_US
dc.subject | Computer Science | en_US
dc.subject | Big Data | en_US
dc.subject | Computing Methodologies | en_US
dc.subject | Open Source | en_US
dc.title | Docker for Data Science | en_US
dc.title.alternative | Building Scalable and Extensible Data Infrastructure Around the Jupyter Notebook Server / | en_US
dc.type | Book | en_US
dc.publisher.place | Berkeley, CA : | en_US
Appears in Collections: Information Technology Engineering (مهندسی فناوری اطلاعات)

Files in This Item:
File | Description | Size | Format
9781484230121.pdf | | 7.1 MB | Adobe PDF
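
As a minimal sketch of the docker-compose.yml pattern the abstract describes, the following file (written for illustration and not taken from the book; the image tags, service names, volume path, and credential are assumptions) declares a Jupyter service alongside a Postgres data store:

version: "3"
services:
  jupyter:
    # Public Jupyter Docker Stacks image (assumed tag); serves notebooks on port 8888
    image: jupyter/scipy-notebook
    ports:
      - "8888:8888"
    volumes:
      - ./notebooks:/home/jovyan/work
  postgres:
    # Public Postgres image acting as the backing data store
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential; replace in practice

Running "docker-compose up" starts both services as one linked system; within the Compose network, the notebook server can reach the database at the hostname postgres.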