Caffe is a deep learning framework made with expression, speed, and modularity in mind.
Expression: models and optimizations are defined as plaintext schemas instead of code.
Speed: for research and industry alike speed is crucial for state-of-the-art models and massive data.
Modularity: new tasks and settings require flexibility and extension.
Openness: scientific and applied progress call for common code, reference models, and reproducibility.
Community: academic research, startup prototypes, and industrial applications all share strength by joint discussion and development in a BSD-2 project.
Expressive architecture encourages application and innovation. Models and optimization are defined by configuration without hard-coding. Switch between CPU and GPU by setting a single flag to train on a GPU machine, then deploy to commodity clusters or mobile devices.
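As a sketch of that configuration-over-code idea, Caffe networks and solvers are declared in plaintext .prototxt schemas. The snippets below are illustrative only (the layer name and parameters are invented, not taken from a shipped reference model):

```protobuf
# net.prototxt -- a convolution layer declared as data, not code
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 32
    kernel_size: 5
    stride: 1
  }
}
```

```protobuf
# solver.prototxt -- the single flag that switches between CPU and GPU
solver_mode: GPU   # change to CPU to run the same net without a GPU
```

The same schema files drive both training and deployment, which is what lets a model trained on a GPU machine run unchanged elsewhere.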
Extensible code fosters active development. In Caffe’s first year, it has been forked by over 1,000 developers and had many significant changes contributed back. Thanks to these contributors the framework tracks the state-of-the-art in both code and models.
Speed makes Caffe perfect for research experiments and industry deployment. Caffe can process over 60M images per day with a single NVIDIA K40 GPU*. That's 1 ms/image for inference and 4 ms/image for learning, and more recent library versions and hardware are faster still. We believe that Caffe is among the fastest convnet implementations available.
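The headline throughput can be sanity-checked with a little arithmetic: 60M images per day works out to roughly 1.4 ms per image end-to-end, consistent with the quoted ~1 ms inference figure once batching overhead is accounted for (this is illustrative arithmetic, not a benchmark):

```python
# Sanity-check the quoted K40 throughput: 60M images/day -> ms per image.
images_per_day = 60_000_000
ms_per_day = 24 * 60 * 60 * 1000  # milliseconds in one day

ms_per_image = ms_per_day / images_per_day
print(f"{ms_per_image:.2f} ms/image")  # 1.44 ms/image
```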
IBM Sterling Order Management System (OMS) is a comprehensive software solution that brokers orders across many disparate systems, orchestrates and automates cross-channel selling and fulfillment processes, and provides a global view of supply and demand across the supply chain.
It is a comprehensive B2C and B2B order management and fulfillment solution that addresses the complexities of fulfilling orders across multiple channels, while cost-effectively orchestrating global product and service fulfillment across the extended enterprise.
The IBM Sterling OMS solution provides a central source of order information, management, and monitoring, and provides a single order repository to enter, modify, track, cancel and monitor the entire order life cycle in real time. Your company can provide customers information about their orders, from any channel or division, when and where they need it.
In addition, your store personnel, call/contact center staff, website and field sales team can leverage the system to place or modify orders, determine order status, check inventory availability across all locations, and manage the returns process.
Single view of supply and demand across channels
Coordinated, customized fulfillment execution to support omni-channel needs
Single source of order information for accurate and timely updates
Integrated omni-channel order fulfillment processes for a seamless customer experience
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.
GraphQL queries access not just the properties of one resource but also smoothly follow references between them. While typical REST APIs require loading from multiple URLs, GraphQL APIs get all the data your app needs in a single request. Apps using GraphQL can be quick even on slow mobile network connections.
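As an illustration of following references in one request (the schema and field names here are hypothetical, not from any particular API), a single GraphQL query can fetch a user and that user's posts together, where a typical REST client would load two URLs:

```graphql
{
  user(id: "42") {
    name
    posts {        # follows the user -> posts reference in the same request
      title
    }
  }
}
```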
GraphQL APIs are organized in terms of types and fields, not endpoints. Access the full capabilities of your data from a single endpoint. GraphQL uses types to ensure apps only ask for what's possible and to provide clear and helpful errors. Apps can use types to avoid writing manual parsing code.
GraphQL creates a uniform API across your entire application without being limited by a specific storage engine. Write GraphQL APIs that leverage your existing data and code with GraphQL engines available in many languages. You provide functions for each field in the type system, and GraphQL calls them with optimal concurrency.
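The resolver idea can be sketched in a few lines of plain Python. This is a toy model of "a function per field", not the API of graphql-core or any real engine; the data and names are invented:

```python
# Toy sketch of GraphQL-style resolvers: one function per field of a type.
# The "engine" simply calls the resolver for each field the client requested.

BOOKS = {"1": {"title": "Dune", "author_id": "a1"}}
AUTHORS = {"a1": {"name": "Frank Herbert"}}

# Resolver functions for each field of a hypothetical Book type.
book_resolvers = {
    "title": lambda book: book["title"],
    "author": lambda book: AUTHORS[book["author_id"]]["name"],
}

def execute(book_id, requested_fields):
    """Call the resolver for each requested field and assemble the result."""
    book = BOOKS[book_id]
    return {f: book_resolvers[f](book) for f in requested_fields}

print(execute("1", ["title", "author"]))
# {'title': 'Dune', 'author': 'Frank Herbert'}
```

A real engine adds a type system, validation, and concurrent resolver execution on top of this basic shape, but the contract is the same: you supply the per-field functions, and the engine decides when to call them.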
A Recommender System predicts the likelihood that a user would prefer an item. Based on the user's previous interactions with the data source the system draws from (along with data from other users, or historical trends), the system can recommend an item to that user. Amazon, for example, recommends books it thinks you might like; behind the curtain, it may well be running a Recommender System. This simple definition lets us imagine a diverse set of applications: recommending documents, movies, music, romantic partners, or whom to follow on Twitter, all pervasive and widely known in the world of Information Retrieval.
Recommender systems are among the most popular applications of data science today. They are used to predict the "rating" or "preference" that a user would give to an item. Almost every major tech company has applied them in some form or another: Amazon uses them to suggest products to customers, YouTube uses them to decide which video to play next on autoplay, and Facebook uses them to recommend pages to like and people to follow. What's more, for some companies (think Netflix and Spotify), the business model and its success revolve around the potency of their recommendations. In fact, Netflix even offered a million dollars in 2009 to anyone who could improve its system by 10%.
Broadly, recommender systems can be classified into 3 types:
Simple recommenders: offer generalized recommendations to every user, based on movie popularity and/or genre. The basic idea behind this system is that movies that are more popular and critically acclaimed will have a higher probability of being liked by the average audience. IMDB Top 250 is an example of this system.
Content-based recommenders: suggest similar items based on a particular item. This system uses item metadata, such as genre, director, description, actors, etc. for movies, to make these recommendations. The general idea behind these recommender systems is that if a person liked a particular item, he or she will also like an item that is similar to it.
Collaborative filtering engines: these systems try to predict the rating or preference that a user would give an item based on the past ratings and preferences of other users. Collaborative filters do not require item metadata, unlike their content-based counterparts.
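The simple recommender can be sketched with an IMDB-style weighted rating, WR = v/(v+m) * R + m/(v+m) * C, where R is a movie's mean rating, v its vote count, m a minimum-votes threshold, and C the mean rating across all movies. The movie data below is invented purely for illustration:

```python
# Simple recommender sketch: IMDB-style weighted rating.
# Shrinks ratings with few votes toward the overall mean C,
# so a high rating from a handful of voters does not dominate.

movies = [
    {"title": "Niche Gem", "rating": 9.5, "votes": 20},          # few votes
    {"title": "Crowd Favorite", "rating": 8.8, "votes": 12_000},  # many votes
    {"title": "Blockbuster Dud", "rating": 6.0, "votes": 5_000},
]

C = sum(mv["rating"] for mv in movies) / len(movies)  # mean rating overall
m_thresh = 1_000                                      # minimum-votes threshold

def weighted_rating(movie, m=m_thresh, C=C):
    v, R = movie["votes"], movie["rating"]
    return v / (v + m) * R + m / (v + m) * C

ranked = sorted(movies, key=weighted_rating, reverse=True)
print([mv["title"] for mv in ranked])
# ['Crowd Favorite', 'Niche Gem', 'Blockbuster Dud']
```

Note how "Niche Gem", despite the highest raw rating, drops below "Crowd Favorite": with only 20 votes, its score is pulled most of the way back to the overall mean.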
What is G Suite? G Suite is a brand of cloud computing, productivity and collaboration tools, software and products developed by Google, first launched on August 28, 2006 as "Google Apps for Your Domain". G Suite comprises Gmail, Hangouts, Calendar, and Google+ for communication; Drive for storage; Docs, Sheets, Slides, Forms, and Sites for collaboration; and, depending on the plan, an Admin panel and Vault for managing users and the services. It also includes the digital interactive whiteboard Jamboard.
While these services are free to use for consumers, G Suite adds enterprise features such as custom email addresses at a domain (@yourcompany.com), an option for unlimited cloud storage (depending on plan and number of members), additional administrative tools and advanced settings, as well as 24/7 phone and email support.
Because G Suite is hosted in Google's data centers, data and information are saved instantly and then synchronized to other data centers for backup purposes. Unlike the free, consumer-facing services, G Suite users do not see advertisements while using the services, and information and data in G Suite accounts are not used for advertising purposes. Furthermore, G Suite administrators can fine-tune security and privacy settings.
Tips for Managing G Suite:
1. Add users and manage services in the Google Admin console
2. Add layers of privacy and security
3. Control users' access to features and services
4. Switch your business email to Gmail
5. Use our deployment and training resources
6. Grant administrator privileges to your IT staff
7. Manage feature releases for your users
8. Remotely manage your mobile fleet
9. Track usage and trends
10. Add domains for free
HTTP Live Streaming (also known as HLS) is an HTTP-based media streaming communications protocol implemented by Apple Inc. as part of its QuickTime, Safari, OS X, and iOS software. It resembles MPEG-DASH in that it works by breaking the overall stream into a sequence of small HTTP-based file downloads, each download loading one short chunk of an overall potentially unbounded transport stream. As the stream is played, the client may select from a number of different alternate streams containing the same material encoded at a variety of data rates, allowing the streaming session to adapt to the available data rate.
HLS is widely supported in streaming servers from vendors like Adobe, Microsoft, RealNetworks, and Wowza, as well as real-time transmuxing functions in distribution platforms like those from Akamai. The popularity of iOS devices and this distribution-related technology support has also led to increased support on the player side, most notably from Google in Android 3.0.
In the Apple App Store, if you produce an app that delivers video longer than ten minutes or greater than 5 MB of data, you must use HTTP Live Streaming and provide at least one stream at 64 Kbps or lower bandwidth. Any streaming publisher targeting iOS devices via a website or app should know the basics of HLS and how it's implemented.
At a high level, HLS works like all adaptive streaming technologies; you create multiple files for distribution to the player, which can adaptively change streams to optimize the playback experience. As an HTTP-based technology, no streaming server is required, so all the switching logic resides on the player.
To distribute to HLS clients, you encode the source into multiple files at different data rates and divide them into short chunks, usually 5 to 10 seconds long. These are loaded onto an HTTP server along with a text-based manifest file with a .M3U8 extension that directs the player to additional manifest files for each of the encoded streams.
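A master .M3U8 playlist for such a setup might look like the following (the URIs and bitrates are illustrative, including a low-bandwidth audio-only rendition of the kind the App Store guideline calls for); each EXT-X-STREAM-INF entry points the player at a variant playlist, which in turn lists that rendition's media chunks:

```m3u8
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=64000,CODECS="mp4a.40.2"
audio_only/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
720p/index.m3u8
```

The player reads this manifest, measures the available throughput, and requests chunks from whichever variant best fits, which is how all the switching logic can live on the client.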