save iterators and reuse them without invalidation risk

For a static STL container, the iterator objects can be safely stored and reused.

The dreaded iterator invalidation is a risk only under structural changes.

Many coding interview questions allow me to save those iterators and store them in a vector, a hash table … Once we retrieve an iterator, we can visit the next/previous nodes.


##strongER trec taking up large codebases

I have grown from a sysAdmin to a dev.
I have grown from a web and scripting pro into a java pro, then a c++ pro!
Next, I hope to grow my competence with large codebases.

I feel large codebase is the #1 differentiator separating the wheat from the chaff — “casual” vs hardcore coders.

With a large codebase, I tend to focus on the parts I don’t understand, regardless of whether that’s 20% or 80% of the code I need to read. I can learn to live with that ambiguity. I guess Rahul was good at that.

In a few cases, within 3M I was able to “take up” a sizable brown field codebase and become somewhat productive. As I told Kyle, I don’t need bottom-up in-depth knowledge to be competent.

In terms of the count of successes taking up sizable brown-field codebases, I am a seasoned contractor, so I would score more points than a typical VP or an average old-timer in a given system.

  • eg: Guardian — don’t belittle the challenge and complexity
  • eg: mvea c++ codebase — My changes were localized, but the ets ecosystem codebase was huge
  • eg: mvea pspc apportionment/allocation logic + price matching logic
  • eg: mtg comm (small green field), integrated into a huge brown field codebase
  • eg: StirtRisk personalization — I made it work and manager was impressed
  • eg: StirtRisk bug fixes on GUI
  • eg: Macq build system — many GTD challenges
  • eg: Moodles at NIE
  • eg: Panwest website contract job at end of 2016
  • ~~~ the obvious success stories
  • eg: AICE — huge stored proc + perl
  • eg: Quest app owner
  • eg: error memos
  • eg: RTS

func overloading: pro^con #c++j..

Overloading is a common design tool in java and other languages, but more controversial in c++. As an interviewer, I once asked for “justification for using overloading as a design tool”. I kinda prefer printInt/printStr/printPtr… rather than overloading print(). I like explicit type differentiation, similar to the [[safe c++]] advice on asEnum()/asString() conversion functions.

Besides the key features listed below, the most common j4 is convenience/readability….. a non-technical justification!

— ADL — is a key c++ feature to support smart overload resolution

— TMP — often relies heavily on function overloading

— optional parameters and default arguments — unsupported in java, so overloading is the alternative.

— visitor pattern — uses overloading. See https://wordpress.com/post/bintanvictor.wordpress.com/2115

— ctor and operator overloading — no choice. Unable to use differentiated function names

— C language — doesn’t support overloading. In a sense, overloading is non-essential.

Name mangling is a key ABI bridge from c++ to C

low_churn+low_stress domains

–high churn high expectation is worst but I have no experience

  • devops – churn due to integration with automation tools

–low churn high expectation

  • xp: Macq quant-dev domain
  • xp: PWM: SQL + Perl + java
  • xp: Qz

–high churn low expectation (low caliber)

  • xp: dotnet at OC
  • mediocre web shops

–low churn low expectation is ideal

  • xp: RTS socket + mkt data
  • xp: citi muni: bond math + coreJava

 

Q: what could derail work-till-70 plan

Q: what things can derail my work-till-70 plan? Let’s be open and include realistic and theoretical factors.

  • A: I think health is unlikely to be the derailer. In contrast, IV competition and age discrimination are more likely. The ruthless march of technology also means demand for my skillset will decline.
  • A: mental health? Look at GregM at RTS. See other blogposts on brain aging.
  • A: On the job, my energy, absorbency and sustained focus might go down (or up) with age. I wrote more in another blogpost — As I age, brown-field localSys will become harder to absorb. I may need to stay at one system longer.
    • On the other hand, if offered the chance to convert from contractor to FTE, I may need to resist and possibly move out.
  • A: On interviews, I think my QQ knowledge will remain competitive for many years.
  • A (pessimistic view): green field vs brown field — as I age, my capacity to handle green field may go down. My speed to learn brown field codebase may also go down but after I learn it, I may be able to retain the knowledge.
  • A1: #1 derailer is demand for my skills. In fact, beside doctors, Wall St tech might be one of the most enviable domains for work-till-70. Note “tech” also includes BAU, sysAdmin, projMgr, productMgr and other support functions.
  • A1b: Based on the rumor that the west coast is more competitive and age-unfriendly, the techies there in their 40’s may find it harder to remain hands-on than on Wall St. I have a natural bias towards the WallSt contract market. If the rumor is confirmed, then Wall St is better for older programmers.

##xp: enough localSys⇒GTD #but respect≠guaranteed

Did localSys effort save me? See my rather surgical tabulation analysis of past PIP + survivals

  1. GS — No, not a lot of respect, but I was able to hold my end of the log. With insufficient localSys I would get even less respect.
  2. Quest — No. The PIP was due to many factors. I feel my Quest GTD was adequate. With insufficient localSys, I would get even less respect.
  3. 🙂 RTS — yes
  4. 95G + volFitter — limited localSys .. nice green field projects for me.
  5. Macq — No. However, With insufficient localSys, I would have been kicked out within Y1

— Macq? I now feel my localSys at Macq was deeper than at Quest or RTS

  • I put in lots of effort and was able to diagnose most build errors on MSVS and Linux.
  • I needed lots of localSys knowledge to modernize codebase for MSVS-2015 and c++14.
  • I needed localSys + innovative on-the-fly code gen + other skills to add the logging decorator into pymodels. This hack alone is worth a bonus. This hack is more effective and more valuable than all my hacks in OC.

However, I feel the expectation was too high, so in the end I got a PIP and felt like damaged goods.

I need to challenge that negative impression of the entire episode.

frequent breaks@work → productivity drop

As I get older I believe my brain needs more frequent mini-breaks, but sometimes my breaks become distractions. The more treacherous “breaks” tend to be

  • blogging
  • personal investment — Eliminate !
  • generic tech learning, unrelated to current project

— sense of urgency — When taking these breaks, a sense of urgency is what I need, though I favor a relaxed sense of urgency.

— focus — frequent mini-breaks should not affect my focus at work. I might need to avoid chitchats.

I think it’s possible to maintain or enhance my focus by taking breaks.

— stay late — Frequent breaks often require longer hours. I would prefer coming in early, but in reality I often stay late.

— I also want to avoid long sit-down sessions. The breaks give relief to my eyes, neck, back, shoulder etc.

##dry$valuable topics 4 high(absorbency)period #flight

This ranking was originally compiled for “in-flight edutainment”. Warning — if a topic has high churn, low accu, low traction, or a short shelf life, then the precious absorbency effort is wasted.

See also the pastTechBet.xlsx listing 20+ tech topics and comparing their mkt-depth, demand-trend, churn-resistance … My conclusion from that analysis is — any non-trivial effort is either on a niche or a non-growing tech skill, with notable exceptions of coreJava+threading. All mainstream and churn-resistant skills need only trivial effort.

  1. coding drill — esp. the hot picks. Look at tag “top50”. Practice solving them again quickly.
    • 😦 low accu?
  2. java concurrency book by Lea. More valuable than c++ concurrency because java threading is an industry-standard reference implementation and widely used.
  3. java tuning, effJava,
  4. c++ TMP?
    • 😦 seldom asked, but might come up more in high-end interviews. TMP is heavily used in real-world libraries.
    • 😦 low traction as seldom used
  5. effModernC++
  6. linux kernel as halo?
    • 🙂 often needed in high-end interviews
    • 😦 low traction since I’m a few layers removed from the kernel internals
    • 😦 no orgro
  7. c++11 concurrency features?
    • 😦 low traction since seldom asked in-depth

## strategic TSN among%%abandoned

Q: name the most strategic trySomethingNew domains that I have since abandoned, given up on, or quit. How about the new plan to take on coding drills as a lifelong hobby?

Don’t spend too much time, because the answers are nothing new even though this is a decent question.

  1. — ranked by surprise
  2. algo trading? actually very few roles spread across a number of firms
  3. c#
  4. drv pricing quant
  5. real time risk as in GS, Qz and Athena
  6. RDBMS tuning
  7. MOM, async, message-driven design knowhow
  8. distributed cache like Coherence and Gemfire
  9. Solaris sys admin and DBA
  10. perl, SQL, Unix/Linux power-user knowledge? No longer a top 10 focus

—-

  • python? Not yet abandoned
  • web dev for dotcom? I did this for years then abandoned it. Many of the tech skills are still relevant like sessions, java thread safety, long-running jobs

hands-on dev beats mgr @same pay

BA, project mgr, even mid-level managers in some companies can earn the same 160k salary as a typical “developer” role. For a manager in finance IT, salary is often higher, but for a statistically meaningful comparison I will use a 160k benchmark. Note in finance IT or tech firms 160k is not high, but on main street many developer positions pay below 160k.

As stated in other blogposts, at the same salary, developers enjoy higher mobility, more choices, higher career security…

##[19] cited strengths@java

In this post we compare java against c++, python, javascript and c#

  • [G3] Scalability and performance [1] — James Governor has a saying: “When web companies grow up, they become Java shops”. Java is built with scalability in mind, which is why it is so popular among enterprises and scaling startups. Twitter moved from Ruby to Java for scaling purposes.
  • [G9] community support [1] such as stackoverflow —
  • [G9] portability [1] — on Linux and Android.
  • [G9] versatile — for web, batch jobs, and server-side. MS is a java shop, using java for trading, but DDAM is not a typical MS app. DDAM has many batch jobs, but the UI is all in web java.
    • python and c++ are also versatile
  • [G5] Java has high correlation with fashionable technologies — hadoop; cloud; big data; microservices… Python and javascript are also in the league.
  • [G3] proven —
    • web apps are the biggest market segment. Some (js/php/ruby) of the top 10 most popular languages are used exclusively for the web. Java is more proven than c#, python, c++.
    • enterprise apps (complex business logic + DB) are my primary focus. java is more proven than python, javascript, php, c#
  • [G3=a top-3 strength]

[1] https://stackify.com/popular-programming-languages-2018/ explains Java’s popularity

 

criticalMass[def]against churn ] tech IV

See also body-building impact{c++jobs#^self-xx

IV knowledge Critical mass (eg: in core java self-study) is one of the most effective strategies against technology churn in tech interviews. Once I accumulate the critical mass, I don’t need a full time job to sustain it.

I have reached critical mass with core java IV, core c++ IV, swing IV (no churn) and probably c# IV.

The acid test is job interviews over a number of years.

Q: how strongly is it (i.e. critical mass) related to accumulation?
A: not much AFTER you accumulate the critical mass. With core java I did it through enough interviews and reading.

Q: how strongly is it related to leverage?
A: not much though Critical mass enhances leverage.

Q: why some domains offer no critical mass?
A: some (jxee) interview topics have limited depth
A: some (TMP, py) interview topics have no pattern I could identify from interview questions.

 

##[18] 4 qualities I admire ] peers #!!status

I ought to admire my peers’ [1] efforts and knowledge (not their STATUS) on :

  1. personal wellness
  2. parenting
  3. personal finance, not only investment and burn rate
  4. mellowness to cope with the multitude of demands, setbacks, disappointments, difficulties, realities about the self and the competition
  5. … to be Compared to
    • zbs, portable GTD, not localSys
    • how to navigate and cope with office politics and big-company idiosyncrasies.

Even though some of my peers are not the most /accomplished/ , they make a commendable effort. That attitude is admirable.

[1] Many people crossing my path are … not really my peers, esp. those managers in China. Critical thinking required.

I don’t have a more descriptive title..

shortest path:2nodes]binary matrix #BFT

Q: given 2 cells in a binary matrix (1=black, 0=white=blocked), check whether the pair is connected and, if yes, return the shortest path. There exists a path of length 1 between 2 cells iff they are side by side or stacked atop each other.

“count paths between 2 bTree nodes #PimcoQ9 Ashish” is arguably harder than this problem, but this problem allows moving in four directions.

The technique from “binary-matrix island count #DeepakM” is more applicable here. A BFT should work —

  • every reachable node is painted Green (like 2)
  • we give up after our queue is empty

https://github.com/tiger40490/repo1/blob/py1/py/grid/classic_connectedPair.py is the implementation, briefly tested.

longest consecutive ints]O(N) #zebra

Popularity — 1000+ likes on Leetcode … possibly popular

Q(Leetcode #128): Given an unsorted array of integers, find the longest consecutive element sequence, in O(N) time. Eg: given [100, 4, 200, 1, 3, 2] return [1,2,3,4]

I call this the zebra problem because every consecutive run of integers is a black stripe and the gaps between the runs are white stripes. We want the widest black stripe. Obviously, each stripe has a minimum size of 1.

https://github.com/tiger40490/repo1/blob/py1/py/array/zebra.py is my O(N) solution, not tested on Leetcode.

========

What’s UnionFind? A reusable technique?

Like “inserting interval #merging #80% done”, I feel this is a data structure problem.

To keep things simple, I will first run one iteration to remove all duplicate items.

I will use a hashtable where each key is a known item. The value is a pointer to a “segment” object.

A segment stores the min and max values. All integers within [min, max] of the segment are always known-items during my scan of input array.

When a new item is either min-1 or max+1, we expand the segment by adjusting the extremes…

The trick is joining two segments, without link pointers. After joining, we don’t really adjust the min/max fields. We only update the max-length global variable if needed.

To keep the hashtable small, I can optionally delete from it, but we don’t want to do a range delete within the loop — that would be O(N²).

specify(by ip:port) multicast group to join

http://www.nmsl.cs.ucsb.edu/MulticastSocketsBook/ has zipped sample code showing

mc_addr.sin_port = htons(thePort);

bind(sock, (struct sockaddr *) &mc_addr, sizeof(mc_addr)); // set the group port, not a local port!
—-
mc_req.imr_multiaddr.s_addr = inet_addr("224.1.2.3");

setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP,
(void*) &mc_req, sizeof(mc_req)); // set the group IP, triggering an IGMP join-request

Note setsockopt() actually sends a request!

====That’s for multicast receivers.  Multicast senders use a simpler procedure —

mc_addr.sin_addr.s_addr = inet_addr("224.1.2.3");
mc_addr.sin_port = htons(thePort);

sendto(sock, send_str, send_len, 0, (struct sockaddr *) &mc_addr, …

## any unexpected success@tsn@@

“## past vindicative specializations” is more comprehensive, but in this blogpost I want to start on a clean slate and answer the question —

Q: I have explored far and wide… for new domains, new challenges, so are there any unexpected successes of try-something-new?

| skill | discovered | proven over | %%strength 1-5 | val4GTD | val4IV |
|---|---|---|---|---|---|
| unix shell + unix admin | 2000 Catcha | 2Y | 3 | 4 | 1 |
| ! perl | 2000 Catcha | 3Y | 4 #many years ago | 2 | 1 |
| LAMP+js | 2000 | 1Y | 2 | 0 | 0 |
| ! python #popularity no surprise | | 1Y | 2 #xp@Mac | 2 | 3 |
| ! socket #and tools | 2017 | never | 1 #lowLevel | 3 if relevant | 2 |
| !! threading concept+java impl | 2010 | 4Y | 5 #theoretical | 0 | 5 |
| ! x-lang collections | 2010 | 5Y | 4 #lowLevel+theoretical | 1 | 5 |
| ! x-lang OO | 2010 | NA | 4 #lowLevel | 0 | 4 |
| white board coding [1] | 2011 | 2Y | 2 @WallSt | 0 | 3 |
| c++ instrumentation/build tools | | | | | |
| ! bond math | 2010 Citi | 1Y | 2 | 1 if relevant | 2 |
| option math | 2011 Barc | 1Y | 3 | 2 if relevant | 1 |
| fin jargon | 2010 | 4Y #mvea | 3 #know what’s relevant | 2 | 2 |
| finSys arch #abstract | 2010 | 2Y | 2 | 3 | 3 |

[1] I am more comfortable on a whiteboard than other candidates.

Exclamation marks mean surprise.

%%algo trading dev xp !! HFT

Let’s not try to look like an HFT pro by citing my low-speed algo trading xp…. I could look desperate and clueless.

My listed experiences don’t guarantee success, because the devil is in the details (just look at all the details in xtap retransmission…). However, I should feel more confident than an outsider.

  • [95G/Citi] OMS — In Citi, I worked on automatic quote cancellation and republish. Most executions involve a partial fill and subsequent quote (limit order) reduction, similar to an OMS. Citi and Baml were probably the 2 biggest muni houses in the world. The Baml system also handles corporate bonds.
  • [Citi] real time event-driven (tick-driven, curve-driven…) quote repricing
  • [Citi] generating and updating limit orders in response to market data changes. Need to brush up on the basic principles since I was asked to elaborate, but a developer probably doesn’t need thorough understanding.
  • [95G] OMS — low-speed, low volume (bond) order management using very few FIX messages with trading venues to manage order state. I actually wrote my own wait/notify framework. This is probably the most valuable algo-trading project I have worked on.
  • [OCBC] simple automated option quote pricer in response to client RFQ
  • [95G] FIX messaging — up to 6-leg per order. Another ECN uses NewOrderSingle and Cancellation messages.
  • [RTS] FIX feeds — with heartbeat, logon etc
  • [NYSE] proprietary protocol is arguably harder than FIX
  • [NYSE] high volume, fault-tolerant raw market data draining at 370,000 messages/sec per thread
  • [NYSE] order book replication — based on incremental messages. I also implemented smaller order books from scratch, twice in coding interviews. This is directly relevant to algo trading
  • [NYSE] connection recovery, under very high messaging rate. Devil in the details.
  • SOR — no direct experience, but Citi AutoReo system has non-trivial logic for various conduits.
  • [Barclays] converting raw market data into soft market data such as curves and surfaces, essential to algo trading in bonds, FX-forwards, equity options.
  • [Stirt] real time ticking risk, useful to some traders if reliable
  • home-made FIX server/client
  • [Citi/Stirt] real time trade blotter — i added some new features
  • [Citi] very limited experience supporting an ETF algo trading system
  • [UChicago] school project — pair trading

However I feel these experiences are seen by hiring managers as insufficient. What are the gaps you see between my experience and the essential skills?

? limited OMS experience
? latency optimization
? network optimization

CHANNEL for multicast; TCP has Connection

In NYSE market data lingo, we say “multicast channel”.

  • analogy: TV channel — you can subscribe but can’t connect to it.
  • analogy: Twitter hashtag — you can follow it, but can’t connect to it.

“Multicast connectivity” is barely tolerable but not “connection”. A multicast end system joins or subscribes to a group. You can’t really “connect” to a group as there could be zero or a million different peer systems without a “ring leader” or a representative.

Even for unicast UDP, “connect” is the wrong word as UDP is connectionless.

Saying nonsense like “multicast connection” is an immediate giveaway that the speaker isn’t familiar with UDP or multicast.

CV-competition: Sg 10x tougher than U.S.

Sg is much harder, so … I better focus my CV effort on the Sg/HK/China market.

OK U.S. job market is not easy, but statistically, my CV had a reasonable hit rate (like 20% at least) because

  • contract employers don’t worry about my job hopper image
  • contract employers have quick decision making
  • some full time hiring managers are rather quick
  • age…
  • Finally, the number of jobs is much higher than in Sg

 

c++enum GTD tips: index page

c# static classes : java/c++

–c++:

use a (possibly nested) namespace to group related free functions. See google style guide.

c# has static classes. C++ offers something similar — P120 [[effC++]]: a struct containing static fields. You are free to create multiple instances of this struct, but there’s just one copy of each field. Kind of an alternative design for a singleton.

This simulates a namespace.

–java:

In [[DougLea]] P86, this foremost OO expert briefly noted that it can be best practice to replace a java singleton with an all-static class

–c# is the most avant-garde on this front

  • C# static class can be stateful but rarely are
  • it can have a private ctor

find sub-array with exact target sum #O(N)#1pass clever

#include <cstdio>

/* Determines if there is a sub-array of arr[] with sum exactly equal to 'sum'.
Nice and simple sliding-window algorithm. Works for exact match only, and
assumes non-negative elements (otherwise the shrink logic breaks down).
*/
void subArraySum(int const arr[], int const n, int const sum)
{
	int curr_sum = arr[0], le = 0, ri;

	/* Add elements one by one to curr_sum and if curr_sum exceeds
the sum, then remove starting elements */
	for (ri = 0; ri <= n-1; ri++) {
		/* If curr_sum exceeds the sum and the sub-array isn't a single
element, then remove the starting elements */
		while (curr_sum > sum && le < ri) {
			printf("curr_sum = %d too high. le = %d; ri = %d\n", curr_sum, le, ri);
			curr_sum = curr_sum - arr[le];
			le++;
		}

		if (curr_sum == sum) {
			printf("Sum found between indexes %d and %d\n", le, ri);
			return;
		}
		// now curr_sum is too small or the sub-array is a single element
		if (ri + 1 <= n-1) // bug fix: don't read past the end of arr[]
			curr_sum = curr_sum + arr[ri+1];
	}
	printf("No subarray found\n");
}
int main()
{
	int const arr[] = { 11,24, 2, 4, 8, 7 };
	int const n = sizeof(arr) / sizeof(arr[0]);
	int const sum = 7;
	subArraySum(arr, n, sum);
}

 

##thread cancellation techniques: java #pthread, c#

Cancellation is needed when you decide a target thread should be told to give up halfway. Cancellation is a practical technique, but too advanced for most IVs.

Note that in both java and c#, cancellation is cooperative. The requester (on its own thread) can’t force the target thread to stop.

C# has comprehensive support for thread cancellation (CancellationToken etc). Pthreads also offer a cancellation feature. Java uses a number of simpler constructs, described concisely in [[thinking in java]]. Doug Lea discussed cancellation in his book.

* interrupt
* loop polling – the preferred method if your design permits.
* thread pool shutdown, which calls thread1.interrupt(), thread2.interrupt() …
* Future — myFuture.cancel(true) can call underlyingThread.interrupt()

Some blocking conditions are clearly interruptible — indicated by the compulsory try block surrounding the wait() and sleep(). Other blocking conditions are immune to interrupt.

NIO is interruptible but the traditional I/O isn’t.

The new Lock objects support lockInterruptibly(), but the traditional synchronized lock grab is immune to interrupt.

2 JGC algos for latency^throughput

https://databricks.com/blog/2015/05/28/tuning-java-garbage-collection-for-spark-applications.html is a 2015 Intel blog.

Before the G1 algo, Java applications typically used one of two garbage collection strategies: Concurrent Mark Sweep (CMS) garbage collection and ParallelOld GC (similar to Parallel GC, the java8 default).

The former aims at lower latency, while the latter is targeted for higher throughput. Both strategies have performance bottlenecks: CMS GC does not do compaction, while Parallel GC performs only whole-heap compaction, which results in considerable pause times.

  1. For applications needing real-time response, we generally (by default) recommend CMS GC;
  2. for off-line or batch programs, we use Parallel GC. In my experience, this 2nd scenario has less stringent requirements, so there is no need to bother with tuning.

https://blog.codecentric.de/en/2013/01/useful-jvm-flags-part-6-throughput-collector/ (2013) has an intro on Throughput vs. pause times

Simple, clean, pure Multiple Inheritance..really@@

Update — Google style guide is strict on MI, but has a special exception on Windows.

MI can be safe and clean —

#1) avoid the diamond. The diamond is such a mess. I’d say don’t assume a virtual base class is a vaccine

#2) make base classes imitate java interfaces … This is one proven way to use MI. Remember the Barclays FI team: all pure virtual methods, no fields, none of the big4 except an empty virtual dtor.

#2a) Deviation: java8 added default methods to interfaces

#2b) Deviation: c++ private inheritance from one concrete base class , suggested in [[effC++]]

#3) simple, minimal, low-interference base classes. Say the 2 base classes are completely unrelated, and each has only 1 virtual method. Any real use case? I can’t think of any, but when this situation arises I feel we should use MI with confidence and caution. Similarly, “goto” could be put to good use once in a blue moon.

## personal xp on low latency trading

Thread — lockfree becomes relevant in latency-sensitive contexts
Thread — create opportunity for parallelism, and exploit multi-core
Thread — avoid locks including concurrent data structures and database. Be creative.
Data structures — STL is known to be fairly efficient, but its hidden allocations can still hurt performance
Data structures — minimize footprint
Data structures — favor primitive arrays because allocations are fewer and footprint is smaller
Algo — favor iterative over recursive
DB tuning — avoid hitting DB? Often impractical IMO
Serialize — favor primitives over java autoboxing
Serialize — favor in-process transfers, and bypass serialization
Mem — avoid vtbl for large volume of small objects
Mem — reuse objects, and avoid instantiation? Often impractical IMO
Mem — mem virtualization (like gemfire) as alternative to DB. A large topic.
Mem — Xms == Xmx, to avoid dynamic heap expansion
Mem — control size of large caches.
Mem — consider weak reference
MOM — multicast is the clear favorite to cope with current market data volume
MOM — shrink payload. FIX is a primary example.
GC — avoid forced GC, by allocating large enough heap
Socket — dedicate a thread to a single connection and avoid context switching
Socket — UDP can be faster than TCP
Socket — non-blocking can beat blocking if there are too many low-volume connections. See post on select()

bond duration(n KeyRateDuration) #learning notes 2

Jargon warning: yield is best written in bps/year, like 545bps/year. If you say 5.45% it gets ambiguous in some contexts such as modified duration. “1% rise in yield” could mean 2 things:

– 5.45% → 6.45%, an absolute rise of 100 bps, or
– 5.45% → 5.50%, a relative 1% rise — a misunderstanding

This is not academic; this is real. Portfolio sensitivity to yield fluctuations is a key concern of banks on Wall St or Main St. It’s all about x bps change in yield. (From now on, always use bps to describe yield; avoid percentage.)

DV01 is dollar value of a “basis point”, free of any ambiguity.

DV01 and modified duration are 2 of the most widely used bond math numbers. Both are derived from bond cash flow.

Mac duration — definition — weighted average of wait time for the cash flows.
Mac duration — usage — not much in real world trading

Modified duration — definition — Mac duration modified “slightly”: divided by (1+r)
Modified duration — usage — more useful than Mac duration. It measures price sensitivity to a yield shift, on a given bond.

Take a simple example: a bond with a modified duration of 5 years. A 100 bps yield change results in a 5% dollar price change.

Key Rate Duration is a natural (and intuitive) extension of the duration concept, useful in MBS etc.