Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database, for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:

- The data replication must be near real-time, yet it should NOT impact the production database.
- The data replication must be horizontally scalable (based on the load), asynchronous and crash-resilient.

Based on the above criteria, we selected the following tools to perform the end-to-end data replication.

We chose MongoDB Stitch for picking up the changes in the source database. Stitch is the serverless platform from MongoDB, and one of the services it offers is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services, so in the Node.js trigger function we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload, and mirror the DB changes onto the target data warehouse (see the sketches below). We implemented the source-to-target data translation by modelling the target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your service as an event-driven, horizontally scalable Lambda is dumb-easy.

In the end, we got to implement a highly scalable, near real-time Change Data Replication service that "works", and deployed it to production in a matter of a few days!
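The post above doesn't include the plumbing itself, so here is a minimal sketch of what the Python micro-service's entry point could look like. It assumes the Stitch trigger forwards each MongoDB change event (with its standard `operationType`, `documentKey`, and `fullDocument` fields) as the JSON body of an SQS message, and that the Lambda is wired to the queue as an SQS event source. The module name `replicator` and the helper `mirror_change` are hypothetical; the helper is fleshed out in the next sketch.

```python
import json

from replicator import mirror_change  # hypothetical module, sketched below


def handler(event, context):
    """AWS Lambda entry point for an SQS event source.

    Lambda delivers SQS messages in batches under event["Records"];
    each record's body is assumed to be a JSON-serialised MongoDB
    change event forwarded by the Stitch trigger.
    """
    for record in event["Records"]:
        change = json.loads(record["body"])

        op = change["operationType"]        # insert / update / delete / replace
        key = change["documentKey"]["_id"]  # the source document's _id
        doc = change.get("fullDocument")    # present for inserts/updates/replaces
                                            # (if the trigger sends full documents);
                                            # absent for deletes

        mirror_change(op, key, doc)

    # An unhandled exception here makes Lambda return the whole batch to
    # the queue for redelivery, which is what gives the pipeline its
    # crash-resilience.
```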
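The source-to-target translation via SQLAlchemy could then look something like the following. The `customers` table and its columns are invented for illustration, and `WAREHOUSE_DSN` is an assumed environment variable; the key idea is that insert, update, and replace can all be funnelled through a single idempotent upsert using PostgreSQL's `ON CONFLICT` support in SQLAlchemy's postgresql dialect.

```python
# replicator.py (hypothetical module name, imported by the handler above)
import os

from sqlalchemy import Column, MetaData, String, Table, create_engine
from sqlalchemy.dialects.postgresql import insert

engine = create_engine(os.environ["WAREHOUSE_DSN"])  # e.g. postgresql://user:pass@host/dw
metadata = MetaData()

# One mirrored table, modelled explicitly: a Mongo "customers" collection
# flattened into relational columns.
customers = Table(
    "customers",
    metadata,
    Column("id", String, primary_key=True),  # mirrors the Mongo _id
    Column("name", String),
    Column("email", String),
)


def mirror_change(op, key, doc):
    """Apply one MongoDB change event to the Postgres warehouse."""
    with engine.begin() as conn:  # one transaction per change event
        if op == "delete":
            conn.execute(customers.delete().where(customers.c.id == str(key)))
            return

        # insert / update / replace all become the same idempotent upsert
        stmt = insert(customers).values(
            id=str(key),
            name=doc.get("name"),
            email=doc.get("email"),
        )
        conn.execute(
            stmt.on_conflict_do_update(
                index_elements=["id"],
                set_={"name": stmt.excluded.name, "email": stmt.excluded.email},
            )
        )
```

Making the upsert idempotent matters here: SQS delivers at-least-once, so the same change event may arrive twice, and replaying it must leave the warehouse row unchanged.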
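Finally, the Zappa deployment is mostly configuration. This is a guess at the relevant `zappa_settings.json`, assuming a Zappa release that supports SQS event sources; the queue ARN, account ID, bucket name, region, and module path are all placeholders.

```json
{
    "production": {
        "project_name": "mongo-pg-replicator",
        "runtime": "python3.9",
        "aws_region": "us-east-1",
        "s3_bucket": "example-zappa-deployments",
        "apigateway_enabled": false,
        "events": [
            {
                "function": "handler.handler",
                "event_source": {
                    "arn": "arn:aws:sqs:us-east-1:123456789012:mongo-changes",
                    "batch_size": 10,
                    "enabled": true
                }
            }
        ]
    }
}
```

With that in place, `zappa deploy production` ships the function (and `zappa update production` pushes later changes). Because Lambda scales out with the queue depth and SQS retains messages until they are successfully processed, this wiring is where the horizontal-scalability and crash-resilience criteria are meant to come from.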
OK, checking in with the same bug as others reported above… importing playlists into Audirvana doesn't work for me.

I'm using a 2012 Mac Mini running Catalina 10.15.5; music files are stored on an external HDD. I've also done a delete and clean re-install of Audirvana 3.5.37 (the most recent version as I'm writing this post).

I saved one of my playlists out of Music, used the File > Import Playlist command in Audirvana, and nothing happens. I'm trying with the M3U file type for the playlist. I looked at the M3U file in a text editor and it's clean; the data is there and uncorrupted. In the past I've exported playlists successfully out of iTunes (now Apple Music) and imported them into Audirvana, so this is a new issue. I'm not sure if this happened concurrently with a recent OS X or Audirvana update… it seems possible, but I'm not certain.

This is a pretty big issue for me: Audirvana recently gave me a problem where all of my playlists are suddenly empty, i.e., they all contain zero tracks. I thought to export them from Apple Music again and import them back into Audirvana, but this isn't working. There are too many playlists with too many tracks for it to be feasible to re-create them by hand in Audirvana.

If anyone has ideas or workarounds to share, please do. I would think this issue is a big blocker for new users adopting Audirvana; if they can't port their playlists over, many potential new customers may be put off.