At Walmart.com in the U.S. and at Walmart’s 11 other websites around the world, we provide a seamless shopping experience where products are sold by:

  1. Walmart’s own merchants for Walmart.com & Walmart stores
  2. Suppliers for Online & Stores
  3. Sellers on Walmart’s marketplaces
 

Product sold on walmart.com: online & in stores by Walmart, and by 3 marketplace sellers

The process is referred to internally as “Item Setup”; visitors to the sites see product listings after data processing for Products, Offers, Price, Inventory & Logistics. These entities are composed of data from multiple sources in different formats & schemas, and they have different data processing characteristics:

  1. Products require the most data preparation (a normalization sketch follows this list), around:
  • Normalization — standardization of attributes & values, which aids search and discovery
  • Matching — a fairly complex problem of matching duplicate products despite imperfect data
  • Classification — classification against categories & taxonomies
  • Content — scoring data quality on attributes like Title, Description, Specifications, etc., and finding & filling the “gaps” through entity extraction techniques
  • Images — selecting the best resolution, deriving attributes, detecting watermarks
  • Grouping — matching and grouping products based on variations, like shoes varying in color & size
  • Merging — selecting the best sources and aggregating data from multiple sources
  • Reprocessing — the catalog needs to be reprocessed to pick up daily changes

2. Offers are made by multiple sellers for the same products and need to be checked for correctness (a validation sketch follows this list) on:

  • Identifiers
  • Price variance
  • Shipping
  • Quantity
  • Condition
  • Start & End Dates

3. Pricing & Inventory adjustments arrive many times a day and need to be processed with very low latency under strict time constraints.

4. Logistics has strong requirements around data correctness to optimize cost & delivery.
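
Concretely, normalization maps the many spellings sellers use for the same attribute value onto one canonical form. Below is a minimal, hypothetical Java sketch; the attribute, dictionary, and fallback rule are illustrative, not Walmart’s actual logic:

```java
import java.util.Locale;
import java.util.Map;

/** Hypothetical sketch of attribute normalization: map raw seller-supplied
 *  values onto canonical values so search and discovery see one spelling. */
public class AttributeNormalizer {

    // Canonical color values; a real dictionary would be far larger and data-driven.
    private static final Map<String, String> COLOR_SYNONYMS = Map.of(
            "navy", "Navy Blue",
            "navy blue", "Navy Blue",
            "navyblue", "Navy Blue",
            "gray", "Grey",
            "grey", "Grey");

    /** Returns the canonical form of a color value, or a title-cased fallback. */
    public static String normalizeColor(String raw) {
        String key = raw.trim().toLowerCase(Locale.ROOT).replaceAll("[-_]", " ");
        return COLOR_SYNONYMS.getOrDefault(key, titleCase(key));
    }

    private static String titleCase(String s) {
        StringBuilder out = new StringBuilder(s.length());
        boolean upper = true;
        for (char c : s.toCharArray()) {
            out.append(upper ? Character.toUpperCase(c) : c);
            upper = (c == ' ');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(normalizeColor(" NAVY-BLUE "));  // Navy Blue
        System.out.println(normalizeColor("dark red"));     // Dark Red (fallback)
    }
}
```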
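
Similarly, here is a hedged sketch of the offer-correctness checks listed above (identifiers, price variance, quantity, dates). The field names, the 14-digit GTIN rule, and the 50% variance threshold are assumptions for illustration:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

/** Hypothetical offer-correctness checks; thresholds and rules are illustrative. */
public class OfferValidator {

    record Offer(String gtin, BigDecimal price, int quantity,
                 LocalDate start, LocalDate end) {}

    /** Returns human-readable problems; an empty list means the offer passes. */
    static List<String> validate(Offer offer, BigDecimal referencePrice) {
        List<String> problems = new ArrayList<>();
        if (!offer.gtin().matches("\\d{14}")) {
            problems.add("identifier is not a 14-digit GTIN");
        }
        // Flag offers priced more than 50% away from the reference price
        // observed for the same product (illustrative threshold).
        double ratio = offer.price()
                .divide(referencePrice, 4, RoundingMode.HALF_UP)
                .doubleValue();
        if (ratio < 0.5 || ratio > 1.5) {
            problems.add("price variance exceeds 50% of reference price");
        }
        if (offer.quantity() < 0) {
            problems.add("negative quantity");
        }
        if (!offer.start().isBefore(offer.end())) {
            problems.add("start date is not before end date");
        }
        return problems;
    }

    public static void main(String[] args) {
        Offer o = new Offer("00012345678905", new BigDecimal("199.99"), 10,
                LocalDate.of(2016, 1, 1), LocalDate.of(2016, 6, 30));
        System.out.println(validate(o, new BigDecimal("99.99")));
        // -> [price variance exceeds 50% of reference price]
    }
}
```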

 

Modified from the original, with permission from Neha Narkhede

Architecturally, this yields many decentralized, autonomous services, systems & teams that handle the data “before & after” listing on the site. As part of a redesign around 2014, we started looking into building scalable data processing systems. I was personally influenced by the famous blog post “The Log: What every software engineer should know about real-time data’s unifying abstraction”, which suggested that Kafka could provide a good abstraction to connect hundreds of microservices and teams, and could evolve into a company-wide multi-tenant data hub. We started modeling changes as event streams recorded in Kafka before processing (a producer sketch follows the list below). The data processing is performed using a variety of technologies:

  1. Stream processing using Apache Storm & Apache Spark
  2. Plain Java programs
  3. Reactive microservices
  4. Akka Streams

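As an illustration of recording changes as events before processing, here is a minimal sketch using the standard Kafka Java producer API; the topic name ("item-changes"), the key choice, and the JSON payload are hypothetical:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

/** Minimal sketch: record an item change as an event in Kafka before any
 *  processing. Topic name and payload shape are hypothetical. */
public class ItemChangeProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Key by product identifier so all changes to one product land in
            // the same partition and are consumed in order.
            String gtin = "00012345678905";
            String event = "{\"gtin\":\"" + gtin
                    + "\",\"attribute\":\"price\",\"value\":\"19.99\"}";
            producer.send(new ProducerRecord<>("item-changes", gtin, event));
        }
    }
}
```
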
The new data pipelines, rolled out in phases since 2015, have enabled business growth: we are onboarding sellers quicker and setting up product listings faster. Kafka is also the backbone of our new Near Real Time (NRT) Search Index, where changes are reflected on the site in seconds.
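
The NRT index works by tailing the change topics. A minimal sketch with the standard Kafka Java consumer is below; the topic, group id, and the indexDocument call are hypothetical placeholders for the real indexing path:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

/** Sketch of an NRT indexer: tail the change topic and push each update to a
 *  search index. Names and the indexing call are hypothetical. */
public class NrtSearchIndexer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "nrt-search-indexer");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("item-changes"));
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    indexDocument(record.key(), record.value());
                }
            }
        }
    }

    /** Placeholder for the real search-index update (e.g. an HTTP call). */
    static void indexDocument(String id, String json) {
        System.out.printf("indexing %s -> %s%n", id, json);
    }
}
```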

 

Message rate for a single day, split hourly

Kafka usage continues to grow, with new topics added every day. We have many small clusters with hundreds of topics, processing billions of updates per day, mostly driven by Pricing & Inventory adjustments. We built operational tools for tracking flows, SLA metrics, and message send/receive latencies for producers and consumers, with alerting on backlogs, latency, and throughput (a backlog-check sketch follows). The nice thing about capturing all updates in Kafka is that we can use the same data for reprocessing the catalog, sharing data between environments, A/B testing, analytics & the data warehouse.
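
Our internal tooling is not public, but a backlog check of the kind described can be sketched with the standard consumer API alone: compare each partition’s end offset with the group’s committed offset. The topic, partition list, group id, and alert threshold below are assumptions:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

/** Sketch of a backlog (consumer lag) check: end offsets minus committed
 *  offsets for a monitored group. Names and threshold are hypothetical. */
public class BacklogMonitor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "nrt-search-indexer"); // group being monitored
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = List.of(
                    new TopicPartition("item-changes", 0),
                    new TopicPartition("item-changes", 1));
            Map<TopicPartition, Long> end = consumer.endOffsets(partitions);
            for (TopicPartition tp : partitions) {
                OffsetAndMetadata committed = consumer.committed(tp);
                long lag = end.get(tp) - (committed == null ? 0 : committed.offset());
                if (lag > 10_000) { // illustrative alert threshold
                    System.err.printf("ALERT: backlog of %d messages on %s%n", lag, tp);
                }
            }
        }
    }
}
```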

The shift to Kafka enabled fast processing but also introduced new challenges: managing many service topologies & their data dependencies, schema management for thousands of attributes, multi-DC data balancing, and shielding consuming sites from changes that may impact business.

The core tenet that drove Kafka adoption, namely that “Item Setup” teams in different geographic locations can operate autonomously, has definitely enabled agile development; I have personally witnessed this in the couple of years since its introduction. The next steps are to increase awareness of Kafka internally for new applications and for (re)architecting existing data processing applications, and to evaluate exciting new streaming technologies like Kafka Streams and Apache Flink. We will also engage with the Kafka open source community and the surrounding ecosystem to make contributions.
