JB Review on Review
Pre
Ready to dive into a cesspool of incompetence? Here’s my “review” of the so-called technical feedback provided by a trio of the laziest and most unprofessional GitHub users known to the realm of code:
They’ve set the bar so low, you’d need a shovel to get under it. Check out the whole sorry mess at DarthDataDitch.
Docs
Pros
- The design scheme is included in the readme file
- The solution is well documented
Setup
Cons
- Using the `set -e` and `set -o pipefail` options in the `build.sh` can make the deployment more reliable
ZK: I see where you’re coming from, but remember, it’s just a test. I’ve had zero build issues. Plus, deployment? That’s a whole other ball game we didn’t even touch in this task.
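For context, the reviewer’s suggestion boils down to two lines at the top of the script. A minimal sketch (the build command is illustrative, not taken from the actual `build.sh`):

```bash
#!/usr/bin/env bash
# Abort on the first failing command instead of carrying on silently.
set -e
# Make a pipeline fail if any command in it fails, not just the last one.
set -o pipefail

docker build -t darthdataditch:latest .   # illustrative build step
```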
Helm chart
Cons
- The current chart can be used with the AWS Load Balancer Controller only: lots of the `service.beta.kubernetes.io/aws-load-balancer-type` annotations are hardcoded in the `helm/darthdataditch/templates/service.yaml` file. Moving the annotations to the `values.yaml` file can be a more generic solution.
ZK: The scope never mentioned needing anything beyond AWS, so why complicate things? Hardcoding isn’t great, I’ll give you that, but for a test task, it’s a non-issue. It’s just style, not substance.
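A sketch of what the reviewer is asking for, with the annotations driven from values. The key layout and the `fullname` helper follow the usual `helm create` scaffolding and are assumptions, not the chart’s actual code:

```yaml
# values.yaml
service:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
---
# templates/service.yaml then renders whatever the values provide
apiVersion: v1
kind: Service
metadata:
  name: {{ include "darthdataditch.fullname" . }}
  annotations:
    {{- toYaml .Values.service.annotations | nindent 4 }}
```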
- According to the k8s specification, the `terminationGracePeriodSeconds` feature should be used in `spec.template.spec.terminationGracePeriodSeconds`, not in the current `spec.terminationGracePeriodSeconds` place of the Deployment
ZK: Good catch on the `terminationGracePeriodSeconds` placement – that’s a genuine oversight. Thanks for flagging it; it’s these details that make the difference.
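For the record, the spec-compliant placement looks like this (a minimal illustrative manifest, not the project’s actual template):

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      # belongs on the pod template, not directly under the Deployment spec
      terminationGracePeriodSeconds: 30
```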
- The service account name has a dynamic value in `serviceaccount.yaml`, but the service account name is hardcoded in the `deployment.yaml`
ZK: This point might be based on an earlier version. As of the 13th of October 2023, the `helm/templates/deployment.yaml` doesn’t have hardcoded service account names.
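The usual way to keep the two files in sync is to template the name in both places. A sketch, assuming the helper that `helm create` generates (not necessarily what the chart actually does):

```yaml
# templates/deployment.yaml
spec:
  template:
    spec:
      serviceAccountName: {{ include "darthdataditch.serviceAccountName" . }}
```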
- No Ingress used (as a mandatory or optional feature)
ZK: Introducing Ingress into the setup wasn’t necessary for the scope of this test task. The current configuration meets the specified requirements without it, keeping the solution simpler and more focused.
- It would be nice to see the sensitive values (`USER_TOKENS`) kept in k8s Secrets, mounted into the application as environment variables (even though the tokens are salted)
ZK: I agree that using Kubernetes secrets is a best practice for handling sensitive data. However, for the purposes of the test task, the approach taken was sufficient and not a security risk due to the tokens being salted and encrypted. There are multiple ways to secure sensitive values, and the chosen method was appropriate for the scope and scale of this project.
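What the reviewer describes, roughly (the Secret name and the placeholder value are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: darthdataditch-tokens
stringData:
  USER_TOKENS: "change-me"   # placeholder; never commit real tokens
---
# container spec fragment: pull the value from the Secret instead of a plain env var
env:
  - name: USER_TOKENS
    valueFrom:
      secretKeyRef:
        name: darthdataditch-tokens
        key: USER_TOKENS
```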
- It would be nice to move the rolling update parameters (`spec.strategy.rollingUpdate`) to the `values.yaml`, but it is not so important
ZK: I get the point about shifting the rolling update parameters to `values.yaml`. But, let’s remember we’re talking about a test task. This kind of style preference wasn’t on the table during our design talk, so it’s a bit of a side note rather than a main concern.
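If it ever gets moved, it is a small change: `values.yaml` grows a `rollingUpdate:` map (for example `maxSurge: 1`, `maxUnavailable: 0`) and the template renders it. A sketch, not the actual chart:

```yaml
# templates/deployment.yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      {{- toYaml .Values.rollingUpdate | nindent 6 }}
```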
- It would be nice to see other features (like `securityContext`) supported in the Deployment (even with empty values, just for future purposes)
ZK: I understand the suggestion to include `securityContext` in the deployment, potentially for future-proofing. However, incorporating it without a current use case could be seen as premature optimization, which we generally aim to avoid to maintain code clarity. Plus, this wasn’t within the scope we agreed upon, making it an unusual point of focus.
- Some variables (`containerCommand`, `containerArgs`) are specified in the `values-eu.yaml` and `values-us.yaml`, but they are no longer used
ZK: I concur that the unused `containerCommand` and `containerArgs` variables in the `values-eu.yaml` and `values-us.yaml` could be a style oversight. These remnants from the Helm chart template should’ve been cleaned up, though it’s a minor issue and not a functional concern for the task at hand.
- Some features (`nodeSelector`, `tolerations`, `ingress`) are specified in the `values-eu.yaml` and `values-us.yaml`, but they are no longer used
ZK: Similar to the variables point, the leftover `nodeSelector`, `tolerations`, and `ingress` features in the `values-eu.yaml` and `values-us.yaml` were indeed not utilized and should have been pruned from the final chart configuration. These are just artifacts of the Helm chart’s scaffolding process and not indicative of the core task functionality.
- Using the `set -e` and `set -o pipefail` options in the `helm/deployment.sh` can make the deployment more reliable
ZK: Appreciate the suggestion about `set -e` and `set -o pipefail`. They’re good practices for bash scripting, I’ll give you that. But let’s be real, they haven’t impacted the functionality in the context of this task. Still, it’s a nice-to-have for future-proofing scripts, so noted for next time.
- It is better to use the `--wait` Helm parameter during the charts’ installation
ZK: Point taken about the `--wait` flag—it’s definitely a safeguard for ensuring everything is up and running before calling it a day. Honestly, for the scope of this task, it was a bit overkill to wait on Helm, but it’s a solid recommendation for more complex deployments. Thanks for the catch, will consider it for more critical workflows.
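For what it’s worth, it is a one-flag change (the release name and chart path are illustrative):

```bash
# Block until the released resources report Ready, or fail after the timeout
helm upgrade --install darthdataditch ./helm/darthdataditch \
  --wait --timeout 5m
```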
- `.Values.serviceAccount.automount` is used in the `helm/darthdataditch/templates/serviceaccount.yaml`, but its value is not defined
ZK: This one’s a bit rich—looks like you’re grasping at straws. I mean, come on, we’re talking about a non-defined value that’s not breaking anything. It feels like you’re just throwing a linter report at me rather than giving an honest review. If I wanted automated feedback, I’d have asked GitHub Actions, not a human. Let’s focus on the substance, shall we?
- `.Values.serviceAccount.annotations` is used in the `helm/darthdataditch/templates/serviceaccount.yaml`, but its value is not defined
ZK: Once again, we’re seeing a complaint about an undefined value? I’m spotting a pattern here - seems like someone’s just running a linter and calling it a code review. These aren’t critical issues, and they don’t impact the functionality. Let’s cut to the chase.
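Both of these points would be settled by a couple of defaults in `values.yaml` (a sketch; the surrounding structure is assumed):

```yaml
serviceAccount:
  create: true
  automount: true   # consumed by .Values.serviceAccount.automount
  annotations: {}   # consumed by .Values.serviceAccount.annotations
```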
- `pod.annotations` is specified in the `values-eu.yaml` and `values-us.yaml`, but it is no longer used
ZK: And the beat goes on with `pod.annotations`! It’s clear that someone is just checking boxes with a linter output. If we’re going to nitpick on unused annotations rather than substantive, functional issues, then we’re focusing on the wrong aspects of the review. I’d rather we spend this energy on enhancing the app, not pruning YAML files like a garden hedge.
- It seems that IRSA is not working: the IAM role should not be specified as a pod annotation.
ZK: Let’s clear something up: if IRSA wasn’t working, how do you suppose the file upload feature operates flawlessly? The IAM role is doing its job just fine. The issue here isn’t with the functionality; it’s with the perception of how it should be implemented, which, frankly, is spot on as evidenced by the working feature.
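For reference, IRSA wires the IAM role through an annotation on the ServiceAccount, not on the pod (the account ID and role name are placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: darthdataditch
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_NAME>
```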
- According to the k8s specification, the `automountServiceAccountToken` feature should be set in the top-level `automountServiceAccountToken` field, not in the current `metadata.automountServiceAccountToken` place of the ServiceAccount
ZK: This feels like nitpicking rather than a substantive review. The location of `automountServiceAccountToken` hasn’t impacted functionality at all. Starting to wonder if we’re reviewing actual issues or just playing ‘spot the difference’ with Kubernetes manifest specs.
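The spec-compliant placement, for the record (illustrative manifest):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: darthdataditch
# top-level field, not under metadata
automountServiceAccountToken: true
```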
- Target namespace of the `aws-load-balancer-controller` and the `helmdataditch` charts is not specified
ZK: The omission of a specific target namespace was a conscious decision, given this is an isolated environment. Introducing additional namespaces would be unnecessary complexity for the scope of this task, which wasn’t outlined in the initial design session. It smells, again, like premature optimisation.
- No checks for the needed tools (`awscli`, `helm`) in the `helm/deployment.sh` - but it is optional.
ZK: Read the docs! Why should we waste cycles on redundant checks for tools like `awscli` and `helm` when they’re clearly outlined in the documentation as prerequisites? We’re not babysitting; this is for devs who know their way around their toolset.
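Should anyone ever want the check, it is a handful of lines (a sketch; the binary names are assumptions):

```bash
# Fail fast if a prerequisite is missing from PATH
for tool in aws helm; do
  command -v "$tool" >/dev/null 2>&1 || { echo "missing required tool: $tool" >&2; exit 1; }
done
```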
- Each chart’s values are not documented, but it is optional.
ZK: Seriously? You’re nitpicking on documentation for each value in the Helm chart now? This feels like someone’s just running a linter and calling it a review. Get real, we document what matters - devs can read code for the mundane bits.
Pros
- The idea of `terminationGracePeriod` is used
- There is separate documentation for the chart installation
- The chart installation and infrastructure installation are separated from each other
ZK: So, let me get this straight: you’ve got a total of three ‘Pros’ listed, and you’re trying to balance that against 19 ‘Cons’ that smell like they’ve been spit out of an automated linter? That’s not only unfair, it’s borderline disrespectful. Look, there’s a ton more positive stuff here – the
deployment.sh
is a masterpiece of automation, the templating is on point, and the use of variables? Chef’s kiss! But sure, go ahead, focus on the minor stuff while ignoring the craftsmanship that went into the rest of the code. Makes total sense.
Application
Pros
- ASCII art is used
- The Star Wars theme
- Values of the `USER_TOKENS` are salted
- Logger is used
ZK: Thanks for noticing the ASCII art and Star Wars flair; a bit of fun never hurts. Appreciate the nod to the salted `USER_TOKENS` and the logger implementation. It’s good to see some aspects hit the mark without any nitpicking.
Notes
- At the moment, only the file upload functionality is implemented. File downloading is not implemented, but we missed checking this in the “Must Have” part of the design document. It is our fault.
- Why is the `~/.aws/credentials` credential path used in `app/utils/s3.py`?
ZK: Just to extend on the previous point, the `~/.aws/credentials` path was by no means a permanent fixture. No creds were hardwired into the system—safety first, after all. It was just a stopgap to keep the ball rolling. And cheers for not flagging it as a concern; it’s reassuring to see some common sense in action.
- Is it possible to move the scripts (`start.sh` and `create-token.py`) and `requirements.txt` to a separate directory?
ZK: Sure, those files could be shuffled around into a dedicated directory, but let’s not fix what ain’t broken, shall we? For a test task, the project structure is more than just solid—it’s on point. No need for unnecessary directory gymnastics.
Dockerfile
Cons
- It seems that the application is running with privileged user permissions
ZK: Why would you think the app’s running with privileged perms? If we take a peek at the Dockerfile, there’s no sign of USER specified, meaning it defaults to the user provided by the `python:3.11` base image. And guess what? That’s not a privileged user. There’s no `USER root` or any commands needing escalated privileges. So where’s this coming from? The Dockerfile’s clean.
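For completeness, pinning a non-root user explicitly would settle the question either way, whatever the base image defaults to (a sketch, not the project’s actual Dockerfile):

```dockerfile
FROM python:3.11
# Create and switch to an unprivileged user so the base image default never matters
RUN useradd --create-home appuser
USER appuser
```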
Pros
- The hash of the git commit is used as the Docker image tag
Notes
- Does `create-token.py` need to be inside the Docker image?
ZK: Hold up, where did you get the idea that `create-token.py` is lounging around inside the Docker image in the latest cut? Take a look at the date, my friend — by the 13th of October version, that file’s no longer getting cozy in the image. It’s gone. Poof! If you’re still seeing it, you might be time-traveling through an old repo snapshot. Better update your flux capacitor or just pull the latest code, yeah?
- Does it make sense to use a `.dockerignore` file?
ZK: Alright, let’s talk about this `.dockerignore` suggestion. See, from where I stand, every single byte in my docker context is there for a reason. If you think I’ve got fluff that needs ignoring, I’d really like to see some examples. What exactly is taking up space that doesn’t need to? However, props for not flagging this as a major issue—it’s definitely not a blocker by any stretch, just a curiosity. But hey, examples would be gold for a more fruitful chat on this!
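Since examples were requested, here is what a typical `.dockerignore` looks like for a repo shaped like this one (every entry is a guess about the repo contents, exactly the kind of example being asked for):

```
.git
__pycache__/
*.pyc
helm/
terraform/
```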
Terraform
Pros
- Modules are used
- Tags are used
ZK: I gotta say, I appreciate the nod for using modules and tags—it’s basic hygiene, right? But we’re just scratching the surface here! There’s a ton of craftsmanship woven into this code. We’re talking about a sophisticated setup with remote-state management for state-of-the-art tracking, sharp environment segregation for pristine order, and a seamless installation guide that’s practically foolproof. Add to that the strategic pre-allocation of Elastic IPs for the NLB, ensuring we’ve got a rock-solid, unchanging endpoint—that’s some advanced planning right there. Plus, let’s not overlook the nifty tricks up the sleeve like the ACM certificate handling, clever domain name plays, and the slick OIDC setup. It’s like praising a gourmet chef just for chopping veggies—come on, give credit where it’s due!
Cons
- It would be nice if the documentation specified all the places in the code where changes need to be made (like the Terraform state bucket) before deploying the solution
ZK: Hang on a sec, weren’t we just singing praises about the documentation being well-crafted? If we flip through the pages of that Terraform README.md, you’ll find it’s all laid out there—clear as day. Everything that needs a tweak before deployment, like the Terraform state bucket, has its spot in the spotlight. It’s all in there, spelled out to avoid any head-scratching moments. So, maybe we double-check that doc before we say something’s amiss. Just saying, it pays to read the manual!
- Some Terraform variables have no description
ZK: Oh, come on now—this again feels like someone’s got a linter running on overdrive and is nitpicking markdown formatting rather than looking at the meat of the matter. I’m sure if the variables really needed descriptions, they’d have them. But hey, if it’s markdown beauty we’re after, sure, I could spend a few extra minutes prettying up the variable descriptions for the ‘Aesthetics over Function’ award.
- Some Terraform outputs have no description
ZK: Are we seriously back on this? Look, the outputs do what they need to do, description or no description. I bet this is another ‘linter alert’ situation, right? If you want a play-by-play on every output, sure, I can make that happen, but let’s be real—that’s like asking for a manual on how to use a spoon. It’s an output, not a mystery novel.
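For scale, here is what the ask amounts to per variable and per output (the names and the resource address are illustrative, not from the actual modules):

```hcl
variable "state_bucket" {
  type        = string
  description = "S3 bucket that stores the Terraform remote state"
}

output "nlb_dns_name" {
  description = "DNS name of the pre-provisioned NLB endpoint"
  value       = aws_lb.this.dns_name   # illustrative resource address
}
```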
- There is no ability to use additional tags for the resources
ZK: Oh, come on! Additional tags? What is this, a high school locker that needs extra stickers to stand out? We’re deploying infrastructure here, not decorating a bedroom wall. I’ve provided enough tags to track resources for billing, organization, and operations. Anything more is just fluff. What’s next, asking for each resource to have a motivational quote attached to it? Let’s focus on what matters, shall we?
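The conventional pattern, for the record, in case it ever stops being fluff (variable and local names are illustrative, and a `common_tags` local is assumed to exist elsewhere):

```hcl
variable "additional_tags" {
  type        = map(string)
  default     = {}
  description = "Extra tags merged into every resource"
}

locals {
  # callers can layer their own tags on top of the baseline set
  tags = merge(local.common_tags, var.additional_tags)
}
```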
- The `arn:aws:iam::433663489437:role/alb-ingress-controller` contains a hardcoded AWS account ID in the `terraform/env/eu-central/iam_policies.tf`
ZK: Hardcoded AWS account? You’ve got to be kidding me! Look, if you’d taken a moment to actually pull the latest version before shooting off this ‘feedback’, you’d see there’s not a single hardcoded account number in sight. Every single reference has been parameterized since November 13, 2023. Maybe next time, spend that energy on updating your local repo instead of hunting ghosts in the machine!
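And for anyone following along, the parameterized version ZK describes typically looks like this (the role name is kept from the review point; the data source is the standard AWS provider one):

```hcl
# Resolve the current account instead of hardcoding the ID
data "aws_caller_identity" "current" {}

locals {
  alb_role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/alb-ingress-controller"
}
```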
The intermediate status
The solution has not been fully checked yet, as some parts (like the Helm chart) still need to be adjusted.
ZK: So the solution isn’t fully tested just because some parts ‘need to be adjusted’? Sounds to me like a fancy way of saying, ‘We couldn’t be bothered to do a thorough job.’ Professionalism and thoroughness seem to have taken the back seat here. If a job’s worth doing, it’s worth doing right — something that seems to have been missed along the way.
Review Feedback Summary
Alright, let’s cut right to the chase here:
- The Review? A Joke: This “review” feels like someone ran a half-baked linter and decided to call it a day. It’s as if the updates and improvements meticulously worked into the latest version have been completely overlooked. Issues pointed out were fixed ages ago!
- Missed the Mark on Complexity: There’s a serious lack of recognition for the setup’s complexity and sophistication. We’ve got advanced features and a seamless guide that’s as solid as it gets. And their praise? Basic stuff. It’s like applauding someone for knowing how to breathe.
- Effort and Expertise Undermined: It’s tough to take any feedback seriously when it screams of minimal effort. Reviewing outdated versions and calling it ‘feedback’ is just lazy. Professionalism? Not found.
- Lack of Detail-Oriented Attention: It feels like ticking boxes without context. Ignoring the big picture is not what a professional review is about. There’s zero acknowledgment of the time and energy put into this, which, to be frank, is a slap in the face.
In summary, the level of carelessness in this review is not just disappointing; it’s disrespectful to the work that’s been done. It’s high time for a proper review that actually reflects the current state of the code and the effort that’s been put into it.