The Internet of Things has given objects that are only episodically interacted with an enormous boost in efficiency, along with further potential for improvement. Many city-wide issues have been addressed with aggregate sensor networks mounted on existing infrastructure. For example, smart garbage cans use sensors to track how full they are, allowing trash collection routes to be optimized to save time and resources. One problem we were interested in was finding street parking in New York City. Some cities already have solutions in place, such as a light- or weight-sensitive sensor for every parking spot on a street. The drawback is that this requires a tremendous number of sensors to be installed before any meaningful data can be collected. Instead, applying computer vision to existing traffic cameras can deliver the same sensing capability as a large sensor network while also providing live, event-driven monitoring. The goal of this work was to implement a cloud-based full-stack system that interacts with traffic cameras, interfaces with network-capable embedded devices, and processes the resulting data to generate recommendations based on current parking availability. An object detection algorithm in OpenCV, together with a background difference-in-pixels approach, was used to detect and classify parking availability. A machine learning model was then created to interface with an iOS app that provides real-time feedback to the user.
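The core of the background difference-in-pixels approach mentioned above can be illustrated with a minimal sketch. The function name, thresholds, and toy frames below are hypothetical stand-ins for the actual OpenCV pipeline: frames are modeled as flat lists of grayscale intensities, and a parking spot is classified as occupied when enough pixels differ from a reference frame of the empty street.

```python
# Minimal sketch of the background difference-in-pixels idea.
# Frames are flat lists of grayscale intensities (0-255); in the real
# system these would be camera images processed with OpenCV.

def spot_is_occupied(background, current, diff_threshold=30, occupied_ratio=0.2):
    """Classify a spot as occupied when the fraction of pixels that
    differ from the empty-street background exceeds a ratio."""
    changed = sum(
        1 for b, c in zip(background, current)
        if abs(b - c) > diff_threshold
    )
    return changed / len(background) > occupied_ratio

# Toy 4x4 "frames": an empty spot shows only lighting noise,
# while a parked car changes a large block of pixels.
background = [100] * 16
empty_frame = [102] * 16           # small per-pixel noise only
car_frame = [100] * 8 + [200] * 8  # half the pixels changed

print(spot_is_occupied(background, empty_frame))  # small differences: not occupied
print(spot_is_occupied(background, car_frame))    # large changed region: occupied
```

The thresholds here would in practice be tuned per camera to absorb lighting changes and shadows, which is one reason the full system pairs this check with an object detector rather than relying on pixel differences alone.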