Partnership on Artificial Intelligence to Benefit People and Society

  • Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.
  • To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions and concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.
  • The engagement of AI users and developers, as well as representatives of industry sectors that may be impacted by AI (such as healthcare, financial services, transportation, commerce, manufacturing, telecommunications, and media) to support best practices in the research, development, and use of AI technology within specific domains.
  • In support of the mission to benefit people and society, the Partnership on AI intends to conduct research, organize discussions, share insights, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advances the understanding of AI technologies, including machine perception, learning, and automated reasoning.
  • The regular engagement of experts across multiple disciplines (including but not limited to psychology, philosophy, economics, finance, sociology, public policy, and law) to discuss and provide guidance on emerging issues related to the impact of AI on society.

Continue reading “Partnership on Artificial Intelligence to Benefit People and Society”

[1609.03677v1] Unsupervised Monocular Depth Estimation with Left-Right Consistency

Unsupervised Monocular Depth Estimation. Awesome red-eye read!  #deeplearning #depth #stereo

  • Abstract: Learning based methods have shown very promising results for the task of depth estimation in single images.
  • Most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training.
  • We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data.
  • By exploiting epipolar geometry constraints, we generate disparity images by training our networks with an image reconstruction loss (a sketch of this objective follows the list).
  • Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.

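The training objective described in the bullets above can be sketched in a few lines. The PyTorch snippet below is a minimal illustration, not the authors' implementation: it warps each view of a rectified stereo pair into the other using the predicted disparities, penalises the reconstruction error, and adds a left-right consistency term between the two disparity maps. The function names, the normalised-disparity convention, and the loss weight are assumptions made here for clarity; the full method also includes an SSIM appearance term and a disparity smoothness term not shown.

```python
# Minimal sketch of an image-reconstruction loss with left-right consistency.
# Shapes, sign conventions, and weights are illustrative assumptions.
import torch
import torch.nn.functional as F


def warp_horizontal(img, disp):
    """Bilinearly sample `img` at horizontal offsets given by `disp`.

    img:  (B, C, H, W) source view
    disp: (B, 1, H, W) disparity expressed in normalised [-1, 1] coordinates
    """
    b, _, h, w = img.shape
    # Base sampling grid in the [-1, 1] coordinates expected by grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=img.device),
        torch.linspace(-1.0, 1.0, w, device=img.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1).clone()
    grid[..., 0] = grid[..., 0] + disp.squeeze(1)  # shift x-coordinates by disparity
    return F.grid_sample(img, grid, align_corners=True)


def unsupervised_depth_loss(left, right, disp_l, disp_r, lr_weight=1.0):
    """Reconstruction loss plus left-right disparity consistency.

    left, right:    rectified stereo pair, (B, 3, H, W)
    disp_l, disp_r: disparity maps predicted by the network
    """
    # Reconstruct each view by warping the other with the predicted disparity.
    left_recon = warp_horizontal(right, -disp_l)
    right_recon = warp_horizontal(left, disp_r)
    appearance = F.l1_loss(left_recon, left) + F.l1_loss(right_recon, right)

    # Left-right consistency: each disparity map should agree with the other
    # one warped into its view.
    lr_consistency = (F.l1_loss(disp_l, warp_horizontal(disp_r, -disp_l)) +
                      F.l1_loss(disp_r, warp_horizontal(disp_l, disp_r)))

    return appearance + lr_weight * lr_consistency
```

Note that in the paper the network predicts both disparity maps from the left image alone, so the stereo pair is only needed during training; at test time a single image suffices.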
Continue reading “[1609.03677v1] Unsupervised Monocular Depth Estimation with Left-Right Consistency”

Concrete AI safety problems

Concrete #AI safety problems

  • Advancing AI requires making AI systems smarter, but it also requires preventing accidents – that is, ensuring that AI systems do what people actually want them to do.
  • We (along with researchers from Berkeley and Stanford) are co-authors on today’s paper led by Google Brain researchers, Concrete Problems in AI Safety.
  • The paper explores many research problems around ensuring that modern machine learning systems operate as intended.
  • Many of the problems are not new, but the paper explores them in the context of cutting-edge systems.
  • We think that broad AI safety collaborations will enable everyone to build better machine learning systems.

Read the full article here.


@RickKing16: “Concrete #AI safety problems”


We (along with researchers from Berkeley and Stanford) are co-authors on today’s paper led by Google Brain researchers, Concrete Problems in AI Safety. The paper explores many research problems around ensuring that modern machine learning systems operate as intended. (The problems are very practical, and we’ve already seen some being integrated into OpenAI Gym.)
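The Gym remark above can be made concrete with a toy sketch. The wrapper below is not from the paper; the `constraint_fn` callback, the penalty value, and the CartPole threshold are hypothetical, but they show one way a safety concern in the paper's spirit (penalising and cutting off unsafe exploration) could be surfaced in an OpenAI Gym environment.

```python
# Hypothetical sketch: wrap a Gym environment so that violating a
# user-supplied safety constraint is penalised and ends the episode.
# Uses the classic Gym step API (obs, reward, done, info).
import gym


class SafetyPenaltyWrapper(gym.Wrapper):
    """Penalise and terminate episodes that violate a safety constraint."""

    def __init__(self, env, constraint_fn, penalty=-10.0):
        super().__init__(env)
        self.constraint_fn = constraint_fn  # obs -> True if the state is safe
        self.penalty = penalty

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if not self.constraint_fn(obs):
            reward += self.penalty           # make unsafe states costly
            info["constraint_violated"] = True
            done = True                      # stop exploring from here
        return obs, reward, done, info


# Illustrative use: treat large pole angles in CartPole as "unsafe".
env = SafetyPenaltyWrapper(gym.make("CartPole-v0"),
                           constraint_fn=lambda obs: abs(obs[2]) < 0.15)
```

An agent trained against the wrapped environment sees the constraint directly in its reward signal rather than only in post-hoc evaluation, which is the general flavour of the practical problems the paper discusses.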

