Introducing tf.vectorized_map

Chase Roberts
1 min read · May 7, 2019

One of the difficulties with writing TensorFlow code is making sure every operation has the right tensor shape, especially when threading a batch dimension through the input data. It is often much easier to write your logic for a single element than to handle a batch axis in every operation.

Taking inspiration from the simplicity of JAX’s vmap method and utilizing the awesome parallelization of the pfor method, we have created tf.vectorized_map. This method allows a user to add a batch dimension to any given computation function and run that computation fully in parallel.
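To illustrate the idea, here is a minimal sketch (the function and shapes are made up for this example): you write the computation for a single example, and tf.vectorized_map runs it across the leading batch axis for you.

```python
import tensorflow as tf

# Hypothetical per-example computation: project one input vector of
# shape [3] down to shape [2]. No batch axis appears anywhere here.
weights = tf.ones([3, 2])

def per_example(x):
    return tf.tensordot(x, weights, axes=1)

# A batch of 8 examples, each of shape [3].
batch = tf.random.normal([8, 3])

# tf.vectorized_map adds the batch dimension for us and runs the
# computation in parallel, producing shape (8, 2).
out = tf.vectorized_map(per_example, batch)
```

Note that per_example never mentions the batch size; the batching is handled entirely by tf.vectorized_map.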

If you have ever used tf.map_fn, the usage is essentially the same, except tf.vectorized_map is dramatically faster (albeit with higher memory usage). Here, we show a >20x speedup computing the outer products of a thousand 32x32 matrices while running in eager mode.
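A sketch of a comparable setup follows, scaled down from the post's thousand 32x32 matrices to keep memory modest (the outer-product function and sizes here are illustrative assumptions, not the post's exact benchmark code). Both calls produce identical results; the difference is that tf.map_fn loops over elements while tf.vectorized_map rewrites the loop into batched operations.

```python
import tensorflow as tf

def outer_product(a):
    # Outer product of a single matrix with itself:
    # [n, n] -> [n, n, n, n]
    return tf.tensordot(a, a, axes=0)

# A batch of 100 8x8 matrices (smaller than the post's benchmark).
mats = tf.random.normal([100, 8, 8])

# Serial: applies the function one matrix at a time.
serial = tf.map_fn(outer_product, mats)

# Vectorized: runs the whole batch as parallel batched ops.
parallel = tf.vectorized_map(outer_product, mats)
```

Wrapping each call in a timer (e.g. Python's time.perf_counter) is a simple way to observe the speedup on your own hardware.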

The tf.vectorized_map method will be available in the nightly release of TensorFlow later today, and it will also be included in the next full release.

Happy vectorizing ^-^
