Okay, so today I’m going to share my experience messing around with the “new eridu city fund best w engines” thing. Honestly, the name sounds way fancier than it actually is.
First off, I started by trying to figure out what the heck a “w engine” even is in this context. I spent a solid hour just googling different variations of that phrase. Turns out, it’s mostly about finding the most efficient or powerful “engines” – code, strategies, whatever – that generate returns within this “new eridu city fund.” Kinda vague, right? That’s where the fun began.

I decided to break it down. I figured the “fund” part meant there had to be some data to analyze, some kind of inputs and outputs. So I went hunting for any publicly available info on the fund itself. I found some reports: nothing super detailed, but enough to get a basic sense of their investment areas – real estate, tech startups, and a few infrastructure projects.
Next, I looked at the “engines” aspect. Since it’s not a literal engine, I thought about what kind of processes could generate value. Things like:
- Algorithmic trading bots: Could these be adapted to the fund’s investment areas?
- Automated data analysis pipelines: Could I build something to quickly identify promising investment opportunities?
- Community engagement tools: Could I use online platforms to find and vet potential startup ideas?
I chose to dive into the data analysis pipeline idea first. I knew Python, so I started by scraping publicly available data on real estate prices, startup funding rounds, and infrastructure project performance. It was messy, inconsistent data, but that’s half the battle, right?
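For the curious, here’s a minimal sketch of the kind of scraper I started with. Everything specific in it is a stand-in: the URL is hypothetical, and the CSS selectors (`div.listing`, `.price`, `.location`) are assumptions, since the real ones depend on whichever site you’re actually pulling from.

```python
import csv

import requests
from bs4 import BeautifulSoup

# Hypothetical listings page -- swap in whatever source you're actually scraping.
URL = "https://example.com/real-estate-listings"


def scrape_listings(url):
    """Pull price and location from a listings page (selectors are made up)."""
    resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    rows = []
    for card in soup.select("div.listing"):  # assumed CSS class
        price = card.select_one(".price")
        location = card.select_one(".location")
        if price and location:
            rows.append({
                "price": price.get_text(strip=True),
                "location": location.get_text(strip=True),
            })
    return rows


if __name__ == "__main__":
    listings = scrape_listings(URL)
    with open("listings_raw.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["price", "location"])
        writer.writeheader()
        writer.writerows(listings)
```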
I spent the next few days cleaning and organizing the data using Pandas. It was tedious, but I managed to create a somewhat usable dataset. Then, I started experimenting with different machine learning models – regression for predicting real estate prices, classification for identifying promising startups based on their funding history. Nothing fancy, just basic stuff using Scikit-learn.
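To give a flavor of that step, here’s roughly the shape of the cleanup-and-model pass, assuming the `listings_raw.csv` file and the made-up column names from the scraper sketch above:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("listings_raw.csv")

# Typical cleanup: strip currency symbols, coerce to numeric, drop junk rows.
df["price"] = pd.to_numeric(
    df["price"].str.replace(r"[^\d.]", "", regex=True), errors="coerce"
)
df = df.dropna(subset=["price"]).drop_duplicates()

# One-hot encode the categorical feature so the regressor can use it.
X = pd.get_dummies(df[["location"]], drop_first=True)
y = df["price"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```

The startup classifier was the same idea: something like `LogisticRegression` over funding-history features instead of prices.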
The results? Meh. The predictions were okay-ish, but nothing groundbreaking. The startup classifier was even worse. It turns out, predicting success based solely on publicly available data is really hard. Who knew?
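If you want to put a number on “okay-ish,” cross-validation is less flattering (and more honest) than a single train/test split. This snippet assumes the `X` and `y` from the sketch above:

```python
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# 5-fold cross-validated R^2, reusing X and y from the cleanup sketch above.
scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print("R^2 per fold:", scores.round(3))
print(f"Mean R^2: {scores.mean():.3f}")
```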
But I did learn a few things. First, data quality was a huge bottleneck; I needed better sources. Second, the models were too simplistic; I needed to fold in more factors, like market sentiment, expert opinions, and even geopolitical events. Basically, I needed to build a much more sophisticated “engine.”

So, what’s next? I’m thinking about improving the data quality by scraping more specialized sources, maybe even paying for some premium datasets. I’m also going to try some natural language processing to analyze news articles and social media posts related to the fund’s investment areas.
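As a first stab at that, something like NLTK’s off-the-shelf VADER sentiment analyzer could turn headlines into a crude signal. The headlines below are invented stand-ins; in practice they’d come from a news scraper:

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch

# Invented stand-in headlines; real ones would come from a news feed or scraper.
headlines = [
    "Infrastructure project finishes ahead of schedule and under budget",
    "Local real estate market cools as interest rates climb",
    "Startup in the fund's portfolio announces major layoffs",
]

analyzer = SentimentIntensityAnalyzer()
for h in headlines:
    # "compound" is a single score in [-1, 1] -- crude, but it's a start.
    score = analyzer.polarity_scores(h)["compound"]
    print(f"{score:+.2f}  {h}")
```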
Look, I didn’t find the “best w engine” or anything even close to it. But I did get my hands dirty, learned a bunch of new stuff, and had some fun along the way. And that’s what it’s all about, right? Stay tuned for the next update; maybe I’ll actually find something useful this time!